paper_id (string, length 19-21) | paper_title (string, length 8-170) | paper_abstract (string, length 8-5.01k) | paper_acceptance (string, 18 classes) | meta_review (string, length 29-10k) | label (string, 3 classes) | review_ids (sequence) | review_writers (sequence) | review_contents (sequence) | review_ratings (sequence) | review_confidences (sequence) | review_reply_tos (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_sFQJ0IOkHF | DivBO: Diversity-aware CASH for Ensemble Learning | The Combined Algorithm Selection and Hyperparameters optimization (CASH) problem is one of the fundamental problems in Automated Machine Learning (AutoML). Motivated by the success of ensemble learning, recent AutoML systems build post-hoc ensembles to output the final predictions instead of using the best single learner. However, while most CASH methods focus on searching for a single learner with the best performance, they neglect the diversity among base learners (i.e., they may suggest similar configurations to previously evaluated ones), which is also a crucial consideration when building an ensemble. To tackle this issue and further enhance the ensemble performance, we propose DivBO, a diversity-aware framework to inject explicit search of diversity into the CASH problems. In the framework, we propose to use a diversity surrogate to predict the pair-wise diversity of two unseen configurations. Furthermore, we introduce a temporary pool and a weighted acquisition function to guide the search of both performance and diversity based on Bayesian optimization. Empirical results on 15 public datasets show that DivBO achieves the best average ranks (1.82 and 1.73) on both validation and test errors among 10 compared methods, including post-hoc designs in recent AutoML systems and state-of-the-art baselines for ensemble learning on CASH problems. | Accept | After a thorough discussion with the authors, all reviewers agree that the paper should be accepted at NeurIPS. The reviewers appreciated the idea of incorporating diversity in the combined algorithm selection and hyper-parameter optimization (CASH) framework and the subsequent use of the diverse models in an ensemble to improve performance. The paper is very clearly written, and the experimental evaluation shows that the proposed techniques provides small but consistent improvements in performance. The authors provided a comprehensive response where they addressed most of the reviewers' concerns. I expect the authors will incorporate all the new results in the camera ready version of the paper. | train | [
"4fYX79ntpM",
"l04QgmfZ4X3",
"3b50w2ZTmPQ",
"M5NO6EV9AsM",
"gKKJGLUaEbF",
"E_e0H-wh8lu",
"SvCAlHrvPik",
"vCL0p_5NGJw",
"s1A-mARzfO8",
"cK5D5rKp9ZH",
"N_d3oCuQXkC",
"re3QDKRZGU",
"An7NtAp-KSi",
"KRMaMlyr-m0",
"3FXOy6Al1b",
"Dg_A3p0_NDx",
"xGsbS6XKhBl",
"RcP-nahbnPj",
"NdCLAPKMxW5"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the explanations in AQ2 and AQ4 and the new results in AQ3 and AQ5. AQ5 definitely makes sense. \n\nFollow up on AQ3. It is somewhat surprising to see that we need $\\beta$ to be so small otherwise diversity hurts more than it helps. It is somewhat disappointing that the crux of this paper is that diversity helps, but in practice, it appears that only a little bit of diversity helps, otherwise it hurts and is worse than no explicit focus on diversity. That is somewhat of a disappointing outcome. While the authors mention that $\\tau$ might need to be tuned, the results in the above tables seem to indicate that changes for any given $\\beta$ is within a 1%, especially for small values of $\\beta$ (which is the recommended value). \n\nHowever, this also highlights the amount of work needed to take advantage of diversity and highlights the contribution of this paper. ",
" ### AQ4. Explanation of weight scheduling function.\nWe want to note that 3.4/100 is a relatively large update number for BO-based methods. Here, we provide the update number of standard BO **without ensemble**. We count the number of iterations if standard BO finds a configuration with better single-learner accuracy than the best-observed learner. On the same dataset, the number is only 2.3/100. The main reason is that, **the configuration suggested by BO does not guarantee to perform better than previously observed ones, even when the target is to optimize accuracy only.** For example, if the configuration is of high uncertainty, it is also likely to be chosen. Compared with 2.3 in standard BO, the update number of 3.4 is relatively large.\n\nIn addition, we agree that the configurations suggested by DivBO in later iterations have better diversity but worse accuracy compared with BO. As also mentioned by the reviewer, if the accuracy is too low, the configuration may not be selected into the ensemble though it is diverse. To prevent the accuracy from being too low, we highlight the use of $\\beta$ in the weight scheduling function, which controls the maximum weight for diversity. While the weight for accuracy is constantly 1, the weight for diversity continuously increases during optimization (**more important given more iterations**). But it increases saturatedly and will not exceed $\\beta$. Interestingly, we observe that when $\\beta>0.2$, there is a clear accuracy drop (see AQ3 above), which may be the consequence of paying too much attention to diversity as mentioned. By default, we set $\\beta$ to be 0.05, so that accuracy also plays an important role while the importance of diversity grows larger. We refer to AQ3 for a detailed analysis of the weight scheduling function. \n\n### AQ5. Additional results of larger ensemble size.\nThe reviewer may suggest analyzing the results if we set a larger ensemble size. As ensemble selection **directly** optimizes the performance on the validation set, **the validation performance is definitely no worse than using a smaller ensemble size** due to the greedy mechanism. However, as pointed out by \\[1\\], if we optimize the validation set too much (i.e., setting a too large ensemble size for ensemble selection), **the test results may deteriorate**, which is referred to as the overfitting issue in AutoML. The results when setting the ensemble size to 100 for BO-ES are as follows,\n\n| Test Errors (%) | elevators | house_8L | pol | quake | wind |\n| - | :-: | :-: | :-: | :-:| :-: |\n| BO-ES (ens_size=100) | 9.98±0.30 | 11.52±0.26 | 1.45±0.33 | 47.43±1.62 | 14.04±0.47 |\n| BO-ES (ens_size=25) | **9.61±0.36** | **11.06±0.33** | **1.35±0.18** | **46.10±2.52** | 14.04±0.53 |\n\nIn our paper, the ensemble size is set to 25 following VolcanoML \\[2\\], which shows good empirical results across different datasets. We observe that when we set the ensemble size to 100 for BO-ES, the test results are generally worse than setting the ensemble size to 25 due to the overfitting issue (not significant on wind but significant on the other four). We have also mentioned this risk of overfitting in the limitation (see “Response Part3, L1”).\n\n\\[1\\] F. Hutter, L. Kotthoff, and J. Vanschoren. Automated machine learning: methods, systems, challenges. Springer Nature, 2019.\n\n\\[2\\] Y. Li, Y. Shen, W. Zhang, J. Jiang, B. Ding, Y. Li, J. Zhou, Z. Yang, W. Wu, C. Zhang, et al. Volcanoml: speeding up end-to-end automl via scalable search space decomposition. 
Proceedings of the VLDB Endowment, 14(11):2167–2176, 2021.",
" ### AQ1. Diversity function.\nYes. We do not propose a new diversity function but apply the one with potentially good empirical performance from Zhang et al.\n\n### AQ2. Settings of significance test.\nWhile the standard deviation is relatively large on some datasets, we think it’s necessary to compare the methods on each dataset. For each dataset, we collect the results of the given two methods across $R=10$ repetitions. And then we conduct the Wilcoxon signed rank test on those pairs (totally $R$ pairs for each dataset). If the difference is not significant, we report an **S** (same). If the difference is significant, and the mean results of DivBO are larger, we report a winning case. If the difference is significant, and the mean results of DivBO are lower, we report a losing case. The setting is similar to that of the significance test in EO \\[1\\] (see Tables 3 and 5 in EO). Also, the optimization results are the same as those used in Table 3 in our paper. In other words, we conduct the significance test on the optimization results while we present the mean and standard deviation in Table 3 based on the same results. We will clarify the settings in the final manuscript.\n\n\\[1\\] J.-C. Lévesque, C. Gagné, and R. Sabourin. Bayesian hyperparameter optimization for ensemble learning. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, pages 437–446, 2016.\n\n### AQ3. Parameter sensitivity.\nWe agree that weight scheduling is important for DivBO and sometimes it should be regarded as “hyperparameters”. To analyze how $\\beta$ and $\\tau$ affect the performance of DivBO, we present the sensitivity analysis. As suggested by Reviewer 15EU, we have improved this part. The sensitivity analysis on two datasets (spambase and House_8L) is as follows,\n\n| Spambase | $\\tau$=0.05 | $\\tau$=0.1 | $\\tau$=0.2 | $\\tau$=0.4 | $\\tau$=0.8 |\n| - | :-: | :-: | :-: | :-: | :-:|\n| $\\beta$=0.025 | 96.59 | 96.30 | 96.20 | *96.67* | 96.34 |\n| $\\beta$=0.05 | 96.12 | *96.59* | **96.78** | *96.74* | 96.41 |\n| $\\beta$=0.1 | 96.01 | 95.98 | 96.27 | 96.23 | 96.30 |\n| $\\beta$=0.2 | 95.76 | 95.80 | 95.80 | 95.76 | 95.68 |\n\n\n| House_8L | $\\tau$=0.05 | $\\tau$=0.1 | $\\tau$=0.2 | $\\tau$=0.4 | $\\tau$=0.8 |\n| - | :-: | :-: | :-: | :-: | :-:|\n| $\\beta$=0.025 | 89.60 | *89.98* | *89.99* | 89.65 | 90.13 |\n| $\\beta$=0.05 | 89.66 | 89.96 | *90.10* | **90.40** | *89.95* |\n| $\\beta$=0.1 | 89.96 | 89.76 | 89.25 | 89.83 | *89.98* |\n| $\\beta$=0.2 | 89.37 | 89.40 | 89.60 | 89.34 | 89.17 |\n\nRemind that $\\beta$ is the maximum of diversity importance and $\\tau$ controls the speed of approaching saturation. We observe that a large $\\beta$ (0.2) leads to a clear accuracy drop, and we suggest using a $\\beta=0.05$. However, we need to tune $\\tau$ to achieve the best results on different datasets. The reason may be that the difficulty for different datasets to find good configurations are different. As DivBO builds on the intuition that we need to focus on accuracy rather than diversity in early iterations, a smaller $\\tau$ is required if it's difficult to find accurate learners in early iterations. The suggested region for tuning $\\tau$ is \\[0.1,0.8\\]. In our paper, we use 0.2 by default, but a tuned $\\tau$ may achieve better results. We will update the sensitivity analysis and add the analysis in Section 3.2.",
" Thank you for the new results and the breakdowns of the original results. I really appreciate the detailed explanations for my different questions regarding the acquisition function and the surrogate fitting.\n\n\nI had a couple of follow-up questions:\n\n- So if I understand correctly, the definition of diversity is not novel but rather from Zhang et al. (2020). Is that a correct assessment? \n\n- For the B/S/W (or Wins/Ties/Losses) comparison, how is the Wilcoxon signed rank test computed between two schemes on the same dataset? Usually, we would compare two schemes across datasets to compute the statistical significance of the difference between the two and/or the direction of the difference. Moreover, is this a one-sided or two-sided test? Also, are these B/S/W numbers corresponding to the numbers in Table 3 of the original paper?\n\n- Thank you for the very nice ablation study on the weight scheduling. This seems to show that the weight-scheduling is critical to being able to outperform existing non-diverse baselines. For the 5 numbers posted DivBO with fixed weight, in some cases, DivBO has worse performance than all ensembling baseline. Given how critical the weight scheduling is for the DivBO performance, is this weight schedule now a \"hyperparameter\" that needs to be carefully handled to get improvements over the non-diverse schemes? Or is DivBO robust to the weight scheduling as long as there is a weight scheduling?\n\n- Thank you for the diversity analysis. It is interesting to look at the number of pool updates and the final disagreement in the ensemble. It is a bit counter intuitive that, towards the end of the optimization, when diversity has the highest weight, we are not seeing a larger number of pool updates with diverse candidates -- only 3.4/100. It is higher than the baselines but they are not doing anything to maximize diversity. It might be the case that, towards the end, the algorithm is finding diverse candidates but they are not accurate enough to enter the pool. But that then defeats the point of having diversity aware search. If diversity does not play a large role towards the end, then does the weight schedule seem counter-intuitive? What is the part of the optimization when diversity plays the largest role?\n\n- One final question is that would the baselines like RS-ES or BO-ES be able to catch up with DivBO if they are just given a larger pool of models for the final ensemble selection (for example the whole 250 models attempted during the optimization)? If the pool is larger, there might be (but not necessarily) more diverse models, and they would be selected if they provide any gains.\n\nW. Zhang, J. Jiang, Y. Shao, and B. Cui. Efficient diversity-driven ensemble for deep neural networks. In 2020 IEEE 36th International Conference on Data Engineering (ICDE), pages 73–84. IEEE, 2020.\n",
" I would like to thank the authors for their impressive rebuttal. I have read your rebuttal, the other reviews and your answers to those. \n\nThanks for producing the ranking by test accuracy / error, this is a much better metric (actually the only valid metric). I hope you will replace the lines in Figure 3 by the test rankings. \n\nI will maintain my score.",
" Thank you for the detailed response, I will increase my score.",
" Thank you for addressing all my comments. I will update my score accordingly.",
" ### Q4. Explanation of captions. \n‘-d’ means the correlation of the diversity surrogates with ground-truth diversity, while ‘-p’ means the correlation of the performance surrogate with ground-truth performance. We will add more explanations.\n\n\n### L1. Potential overfitting issue.\nAs suggested, we will highlight the issue in the limitation (see L2 below). \n\n### L2. Limitation.\nTo clarify the limitation, we will add a paragraph in Section 4.4. The updated limitations will be as follows,\n\nLimitation. a) The use of ensemble leads to higher inference latency than using the single best learner (approximately K times where K is the number of learners in the ensemble). This latency can be reduced with the aid of parallel computing if we have sufficient computational resources. In addition, as ensemble selection is fitted on the validation set, there's a risk of overfitting on the test set for small datasets. b) DivBO using Equation 3 as the diversity function can not directly support algorithms that only predict class labels (e.g., SVC). Though DivBO still works by converting the predicted labels to class probability (like [1, 0, …]), other diversity functions can be developed to support those algorithms better. ",
" # Response to Reviewer anrp\nThanks for your constructive feedback! We believe that addressing this feedback will make our paper significantly stronger. The detailed response to each question is as follows,\n\n### Q1. Results on more datasets.\nAs suggested, we add two more datasets from OpenML which contain more than 20k instances (40768 for 2dplanes and 32769 for amazon_employee). We provide the test errors (%) of competitive post-ensemble methods as follows,\n\n| | RS-ES | BO-ES | RB-ES| DivBO|\n| - | :-: | :-: | :-: | :-: |\n| 2dplanes | 7.11±0.12 | 7.07±0.08 | 7.20±0.08 | **7.00±0.08** |\n| amazon_employee | 5.29±0.15 | 5.25±0.15 | 5.21±0.11 | **5.16±0.09** |\n\n\nWhile the results on amazon_employee are not significant, we apply the Wilcoxon signed-rank test and the p-value is 0.04 (< the threshold 0.05), which means the improvement of DivBO over RB-ES is statistically significant. We will definitely add them to our experiments in the final manuscript.\n\n### Q2. Auto-Gluon Tabular as a new baseline.\nThe search space plays a significant role in CASH optimization. As DivBO is an algorithm framework rather than a system, we compare it with other baselines by using the same search space (i.e., auto-sklearn space). However, AutoGluon applies a more compact space than auto-sklearn, and it's not fair to directly compare DivBO on auto-sklearn search space with AutoGluon. We agree with the reviewer that Auto-Gluon Tabular is a strong baseline, and should be compared in the paper. To make a relatively fair comparison, we reproduce a similar search space of AutoGluon except for the specified neural networks due to implementation difficulty. The results on five datasets are as follows,\n\n| Test Errors (%) | elevators | house_8L | pol | quake | wind |\n| - | :-: | :-: | :-: | :-:| :-: |\n| AutoGluon Tabular | 9.10±0.00 | **9.98±0.00** | 1.23±0.00 | 44.72±0.00 | 14.37±0.00 |\n| DivBO (AutoGluon space) | **9.01±0.11** | 10.06±0.17 | **1.18±0.07** | 44.75±0.60 | **14.24±0.18** |\n\n\nNote that, the search space affects the results a lot. For example, AutoGluon's results on wind are worse than RS-ES using the auto-sklearn space. However, AutoGluon's results on the other four datasets are better than most of the results using the auto-sklearn space, which is consistent with the observation that AutoGluon often outperforms auto-sklearn. The reason may be that AutoGluon is equipped with a well-designed search space, which kicks out less reliable algorithms on modern datasets (e.g., Naive Bayes) and adds strong ones (e.g., Catboost). Note that, the variance of AutoGluon's results is zero because it fixes the random seed in its inner design. In addition, we observe an error decrease when using DivBO in this search space. Concretely, the improvement is **statistically** significant on three datasets, not significant on one (quake), and slightly worse on the other one (house_8L).\n\n### Q3. Explanation of ensemble diagnostics.\nThe sentence “We also find that DivBO- performs worse than BO without ensemble learning” means that the best learner found by DivBO- is worse than BO. This is not contradictory to the results that DivBO performs better than BO-ES. The performance of an ensemble does not depend on the best observed learner alone, but on all the learners in the ensemble. 
The average learner performance is a metric that measures the overall strength of base learners in the ensemble, which is also used in previous ensemble learning work \\[1\\] as one of the ensemble diagnostics.\n\nWe also apply the predictive disagreement \\[2\\] to analyze the diversity of the ensemble. Given two learners, the pair-wise predictive disagreement computes the ratio of disagreed instances for all instances. (Disagreement happens when a learner classifies correctly but the other does not.) The disagreement for an ensemble is computed by averaging the pair-wise disagreement given all pairs of learners in the ensemble. Generally, the larger disagreement is, the more diverse the learners in the ensemble are. We compute the disagreement at the 250-th iteration in Figure 5, and the mean results are as follows, \n\n| | BO-ES | RB-ES | DivBO |\n| - | :-: | :-: | :-: |\n| Disagreement | 0.09 | 0.13 | 0.27 |\n\nWhile the average performance of base learners is similar, the ensemble built by DivBO enjoys a higher disagreement value (diversity), thus achieving better test accuracy.\n\n[1] S. Zaidi, A. Zela, T. Elsken, C. C. Holmes, F. Hutter, and Y. Teh. Neural ensemble search for uncertainty estimation and dataset shift. Advances in Neural Information Processing Systems, 34, 2021.\n\n[2] Tang, E. Ke, Ponnuthurai N. Suganthan, and Xin Yao. \"An analysis of diversity measures.\" Machine learning 65.1 (2006): 247-271.",
" ### Q3. Significance test.\nTo check whether the improvement of DivBO is statistically significant, we conduct the Wilcoxon signed-rank test on each dataset given two methods. The difference is significant when the value $p\\leq0.05$ \\[1\\]. We count the number of datasets if 1) DivBO is better than the other method, and the difference is statistically significant (**B**); 2) the difference is not statistically significant (**S**); and 3) DivBO is worse, and the difference is statistically significant (**W**). For each dataset, the rank of DivBO is 1 for **B**, 1.5 for **S**, and 2 for **W**. We compute the pair-wise rank by averaging the rank on 15 datasets. The results are presented as follows,\n\n**DivBO vs. RB-ES.** The pair-wise ranks of DivBO and RB-ES on all datasets are 1.33 and 1.67, respectively. We agree that RB-ES is a strong baseline. Through the significance test, we observe that DivBO performs no worse than RB-ES on 12 datasets and better on 8 datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 8 | 4 | 3 |\n\n**DivBO vs. BO-ES.** The pair-wise ranks of DivBO and BO-ES on all datasets are 1.17 and 1.83, respectively. While the DivBO framework is extended from BO, DivBO generally performs better than BO. Concretely, DivBO performs no worse than BO-ES on 13 datasets and better on 12 datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 12 | 1 | 2 |\n\n**DivBO vs. RS-ES.** The pair-wise ranks of DivBO and RS-ES on all datasets are 1.07 and 1.93, respectively. DivBO performs no worse than RS-ES on all datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 13 | 2 | 0 |\n\n\\[1\\] Wilcoxon, Frank. Individual comparisons by ranking methods. Breakthroughs in statistics. Springer, New York, NY, 1992. 196-202.\n\n### Q4. Sensitivity analysis.\nAs suggested by the reviewer, we extend the 4\\*4 grid to the 4\\*5 grid for sensitivity analysis and add another dataset. The results are as follows, \n\n| Spambase | $\\tau$=0.05 | $\\tau$=0.1 | $\\tau$=0.2 | $\\tau$=0.4 | $\\tau$=0.8 |\n| - | :-: | :-: | :-: | :-: | :-:|\n| $\\beta$=0.025 | 96.59 | 96.30 | 96.20 | *96.67* | 96.34 |\n| $\\beta$=0.05 | 96.12 | *96.59* | **96.78** | *96.74* | 96.41 |\n| $\\beta$=0.1 | 96.01 | 95.98 | 96.27 | 96.23 | 96.30 |\n| $\\beta$=0.2 | 95.76 | 95.80 | 95.80 | 95.76 | 95.68 |\n\n\n| House_8L | $\\tau$=0.05 | $\\tau$=0.1 | $\\tau$=0.2 | $\\tau$=0.4 | $\\tau$=0.8 |\n| - | :-: | :-: | :-: | :-: | :-:|\n| $\\beta$=0.025 | 89.60 | *89.98* | *89.99* | 89.65 | 90.13 |\n| $\\beta$=0.05 | 89.66 | 89.96 | *90.10* | **90.40** | *89.95* |\n| $\\beta$=0.1 | 89.96 | 89.76 | 89.25 | 89.83 | *89.98* |\n| $\\beta$=0.2 | 89.37 | 89.40 | 89.60 | 89.34 | 89.17 |\n\nRemind that $\\beta$ is the maximum of diversity importance and $\\tau$ controls the speed of approaching saturation. We observe that a large $\\beta$ (0.2) leads to a clear accuracy drop, and we suggest using a $\\beta=0.05$. However, we need to tune $\\tau$ to achieve the best results on different datasets. The reason may be that the difficulty for different datasets to find good configurations are different. As DivBO builds on the intuition that we need to focus on accuracy rather than diversity in early iterations, a smaller $\\tau$ is required if it's difficult to find accurate learners in early iterations. The suggested region for tuning $\\tau$ is \\[0.1,0.8\\]. In our paper, we use 0.2 by default, but a tuned $\\tau$ may achieve better results. 
As suggested, we will update the sensitivity analysis and add the analysis in Section 3.2.\n\n### Q5. Limitation.\nTo clarify the limitation, we will add a paragraph in Section 4.4. The updated limitations will be as follows,\n\nLimitation. a) The use of ensemble leads to higher inference latency than using the single best learner (approximately K times where K is the number of learners in the ensemble). This latency can be reduced with the aid of parallel computing if we have sufficient computational resources. In addition, as ensemble selection is fitted on the validation set, there's a risk of overfitting on the test set for small datasets. b) DivBO using Equation 3 as the diversity function can not directly support algorithms that only predict class labels (e.g., SVC). Though DivBO still works by converting the predicted labels to class probability (like [1, 0, …]), other diversity functions can be developed to support those algorithms better.",
" Thanks for your constructive feedback! We believe that addressing this feedback will make our paper significantly stronger. The detailed response to each question is as follows,\n\n### Q1. A new baseline.\nThanks for the suggestion. We evaluate the simplest version as suggested, which tunes each algorithm for the same budget and then builds a post-hoc ensemble. In fact, it is a simplified version of the baseline RB-ES, in which RB-ES eliminates some of the algorithms after several iterations. We name it kBO-ES and present the results as follows,\n\n| Test Errors (%) | elevators | house_8L | pol | quake | wind |\n| - | :-: | :-: | :-: | :-:| :-: |\n| RS-ES | 9.51±0.28 | 11.21±0.38 | 1.39±0.15 | 46.79±1.57 | 14.34±0.47 |\n| kBO-ES | 9.55±0.32 | 11.18±0.34 | 1.39±0.16 | 46.81±1.48 | 14.29±0.45 |\n| DivBO | **9.40±0.28** | **10.80±0.22** | **1.34±0.17** | **45.55±1.37** | **13.93±0.42** |\n\nWe observe that the results of kBO-ES are quite similar to RS-ES (Random search with ensemble selection). The reason is that the search space contains a lot of algorithms while the budget is quite limited (250 iterations). Each algorithm can only be tuned about 22 times. For each algorithm, we also need to tune the feature engineering operators (>50 HPs in auto-sklearn search space), and thus the BO surrogate for each algorithm is under-fitted. Therefore, Bayesian optimization for each algorithm performs like random search. kBO-ES is an intuitive method but seems to perform not competitively when the search space is very large.\n\n### Q2. Experiments on diversity.\nAs suggested by the reviewer, we add the experiments on the influence of removing the most diverse learners from the final ensemble. While DivBO is extended from naive BO, we evaluate the mean influence of removing the top-3 diverse models from the final ensemble. As the ensemble is built on the validation set, we present the validation error gaps on five datasets as follows, (a positive gap means removing the models leads to an error increase)\n\n| Gap (%) | elevators | house_8L | pol | quake | wind |\n| - | :-: | :-: | :-: | :-:| :-: |\n| BO-ES | +0.21 | -0.04 | +0.02 | +0.98 | +0.38 |\n| DivBO | +0.38 | +0.15 | +0.01 | +1.28 | +0.61 |\n\nWe observe that generally, removing the most diverse learners from DivBO leads to a larger error increase than BO-ES. As it's easy to learn good learners on the dataset pol (the accuracy of almost all the learners in the ensemble is above 98%), removing learners affects quite little on the ensemble performance.\n\nBesides the experiments suggested by the reviewer, we also add an overall analysis of the final ensemble using ensemble diagnostics. We apply the predictive disagreement[1] to further analyze the diversity of the ensemble. Given two learners, the pair-wise predictive disagreement computes the ratio of disagreed instances for all instances. (Disagreement happens when a learner classifies correctly but the other does not.) The disagreement for an ensemble is computed by averaging the pair-wise disagreement given all pairs of learners in the ensemble. Generally, the larger disagreement is, the more diverse the learners in the ensemble are. 
We compute the disagreement at the 250-th iteration in Figure 5, and the mean results are as follows, \n\n| | BO-ES | RB-ES | DivBO |\n| - | :-: | :-: | :-: |\n| Disagreement | 0.09 | 0.13 | 0.27 |\n\nWhile the average performance of base learners is similar, the final ensemble built by DivBO enjoys a higher disagreement value (diversity), thus achieving better test accuracy.",
" ### Q2. Re-organization.\nThanks for the suggestion. We will remove some details of how we fit the diversity surrogate to the appendix. And as suggested, we will add a sensitivity check in the main paper (move from the appendix) and analyze the effects of $\\beta$ and $\\tau$. In addition, we will add more experiments, including the significance test (see W3 above) and experiments related to diversity (see W4 above).\n\n\n### Q3. Explanation of the acquisition function.\nThe main reason we design the acquisition function is to utilize the variance of the diversity surrogate. If we use a minimum of per-theta-sample-means as suggested, we are approximately comparing the mean value of $M_{div}(\\theta, x)$ but ignore the variance, and the most similar learner is the one with the largest mean diversity value. If we use a mean over minimum-of-per-theta as used in DivBO, a base learner from the temporary pool that has low diversity mean but high variance might also be competitive, and this method highlights the uncertainty of the diversity surrogate. \n\nA similar sampling procedure is applied in an HPO transfer learning work RGPE[1], where they also encourage surrogates with high predictive variance to be sampled. This design may not influence the experiments a lot, but matches the intuition that the uncertainty of the diversity surrogate should also be considered. \n\n[1] Feurer, Matthias, Benjamin Letham, and Eytan Bakshy. Scalable meta-learning for bayesian optimization using ranking-weighted gaussian process ensembles. AutoML Workshop at ICML. Vol. 7. 2018.\n\n### Q4. Explanation of baselines.\nBO is exactly DivBO where w equals 0. More precisely, DivBO is an extension of BO by additionally considering diversity when choosing the next configuration to evaluate.\n\n### Q5. Explanation of surrogate fitting. \nThe correlation of the diversity surrogate is still increasing after 250 iterations. Concretely, at the 300-th iteration, the correlation of LightGBM surrogate reaches 0.69 and 0.62 on quake and wind, respectively. The reasons why the correlation is not much higher may be: 1) The ground-truth diversity observations are noisy. Training models based on a given configuration is not deterministic, i.e., the prediction of an instance may be different when training twice based on the same configuration. 2) The Kendall Tau correlation in Figure 2 is computed based on **the predictive means** and the ground-truth diversity observations. Though using predictive means in the experiment can show the effectiveness of the diversity surrogate, it ignores the predictive variance, and the predictive mean may not be precise when the diversity prediction of a configuration pair is of high variance. We will add the above analysis to the experiments.\n\n### Q6. Explanation of diversity function.\nThanks for pointing out the problem. In DivBO, we convert the predicted labels of SVCs to class probability (like [1, 0, …]) to prevent errors. We agree that new diversity functions can be applied to support those types of algorithms better, and we consider it a limitation of the current version of DivBO. (See the limitation part below)\n\n### Q7. Explanation of Figure 6.\nThe ensemble selection applies a selection procedure with replacement (as shown in Appendix A.1), which means that some of the learners selected for the final ensemble may be duplicates. We demonstrate the diversity of unique base learners so that the number of base learners seems different.\n\n### L1. 
Limitation.\nTo clarify the limitation, we will add a paragraph in Section 4.4. The updated limitations will be as follows,\n\nLimitation. a) The use of ensemble leads to higher inference latency than using the single best learner (approximately K times where K is the number of learners in the ensemble). This latency can be reduced with the aid of parallel computing if we have sufficient computational resources. In addition, as ensemble selection is fitted on the validation set, there's a risk of overfitting on the test set for small datasets. b) DivBO using Equation 3 as the diversity function can not directly support algorithms that only predict class labels (e.g., SVC). Though DivBO still works by converting the predicted labels to class probability (like [1, 0, …]), other diversity functions can be developed to support those algorithms better.",
" ### W4. Diversity analysis.\n\n#### Predictive performance of suggested configurations\nAs suggested by the reviewer, we first provide the validation errors of configurations (without ensemble) suggested by BO-ES, RB-ES, and DivBO during the last 50 iterations on quake (the same settings as shown in Figure 5). Note that, here we present the errors but not the best observed errors. The variance is **very large** due to the high randomness of Bayesian optimization when balancing exploration and exploitation. The results are as follows,\n\n| | BO-ES | RB-ES | DivBO |\n| - | :-: | :-: | :-: |\n| Val Errors (%) | 44.03±2.58 | 43.85±2.49 | 44.33±2.67 |\n\nThe results are consistent with Figure 3 that, without ensemble, the single learner suggested by DivBO- performs worse than BO and RB. Note that, this does not mean that DivBO suggests bad configurations. We randomly evaluate 300 configurations from the search space. The mean result of those diverse configurations is better than 88% of the random configurations. Below, we analyze the effects of suggesting those diverse configurations on the ensemble.\n\n#### Effective update frequency of ensemble\nAs mentioned by the reviewer, a very large proportion of configurations suggested during the search process will not affect the final ensemble. It's difficult to directly build the relationship between each suggested configuration and the final ensemble. To show how the diversity during the search process affects the ensemble, we use the update times of the temporary pool as a metric. \n\nAs mentioned in the paper, the temporary pool is built in the same way as the final ensemble. In other words, the temporary pool is the final ensemble if the optimization stops at the previous iteration. If the temporary pool changes, the configuration suggested at the previous iteration is included in the pool. In short, a change of the temporary pool at least indicates the suggested configuration affects the current ensemble. However, though the pool changes, the performance may not be improved due to the greedy mechanism, and thus we count the effective update times (i.e., the pool changes **and the validation error of the ensemble decreases**).\n\nAs the pool updates very frequently in the beginning, we only calculate the mean effective update times of DivBO, RB-ES, and BO-ES on all datasets during the last 50 and 100 iterations (see the table below). The pool is relatively stable in the last 50 iterations, which also indicates a budget of 250 iterations is sufficient for the datasets. We observe that, on average, DivBO will improve the temporary pool more than once in the last 50 iterations. While the difference between BO-ES and DivBO is the diversity part (also see Q4), we attribute this frequency gain to the use of diversity during the search process.\n\n| | BO-ES | RB-ES | DivBO |\n| - | :-: | :-: | :-: |\n| Counts (last 100) | 1.8 | 2.1 | 3.4 |\n| Counts (last 50) | 0.6 | 0.8 | 1.5 |\n\n#### Diversity analysis of the final ensemble\nIn addition, we also apply the predictive disagreement[1] to further analyze the diversity of the ensemble. Given two learners, the pair-wise predictive disagreement computes the ratio of disagreed instances for all instances. (Disagreement happens when a learner classifies correctly but the other does not.) The disagreement for an ensemble is computed by averaging the pair-wise disagreement given all pairs of learners in the ensemble. Generally, the larger disagreement is, the more diverse the learners in the ensemble are. 
We compute the disagreement at the 250-th iteration in Figure 5, and the mean results are as follows, \n\n| | BO-ES | RB-ES | DivBO |\n| - | :-: | :-: | :-: |\n| Disagreement | 0.09 | 0.13 | 0.27 |\n\nWhile the average performance of base learners is similar, the final ensemble built by DivBO enjoys a higher disagreement value (diversity), thus achieving better test accuracy.\n\n[1] Tang, E. Ke, Ponnuthurai N. Suganthan, and Xin Yao. \"An analysis of diversity measures.\" Machine learning 65.1 (2006): 247-271.\n\n### Q1. Ablation study.\nAs suggested, we add the results by setting $w=0.1$. The results on five datasets are as follows,\n\n| Test Errors (%) | elevators | house_8L | pol | quake | wind |\n| - | :-: | :-: | :-: | :-:| :-: |\n| DivBO (fixed) | 9.59±0.30 | 11.51±0.28 | 1.66±0.19 | 45.63±1.45 | 14.27±0.36 |\n| DivBO | **9.40±0.28** | **10.80±0.22** | **1.34±0.17** | 45.55±1.37 | **13.93±0.42** |\n\nThe results show that DivBO with weight schedule (Equation 5) performs much better than fixing the weight for diversity (not significant on quake, but significant on other 4 datasets). It fits the intuition that motivates the weight schedule design in Section 3.2. We will add the ablation study to the experiments.",
" Thanks for your constructive feedback! We believe that addressing this feedback will make our paper significantly stronger. The detailed response to each question is as follows,\n\n### W1. A brief discussion on diversity definition.\nWe will add a brief discussion on the definition of diversity in Section 3.1 as follows, \n\nThe existing diversity measures can be generally divided into pair-wise and nonpair-wise measures. The nonpair-wise diversity [1] directly measures a set of learners in the ensemble. While the ensemble may change during each iteration, it’s difficult to model the diversity of an ensemble with a candidate configuration and multiple learners. Therefore, we use the pair-wise measures to simply learn the diversity of two given configurations. To this end, we follow the definition in previous research [2] that explicitly improves the diversity of neural networks and also shows satisfactory empirical results. \n\n[1] Z.-H. Zhou, Ensemble Methods: Foundations and Algorithms. Boca Raton, FL, USA: CRC, 2012.\n\n[2] W. Zhang, J. Jiang, Y. Shao, and B. Cui. Efficient diversity-driven ensemble for deep neural networks. In 2020 IEEE 36th International Conference on Data Engineering (ICDE), pages 73–84. IEEE, 2020.\n\n### W2. The effect of diversity in ensemble selection. \nWe agree that it's quite difficult to explain whether ensemble selection prefers high or low diversity during each selection round. But considering an extreme case where all the learners are exactly the same (with no diversity), there’s no gain when using ensemble selection. This at least indicates that building a good ensemble using ensemble selection still requires diversity. To show that suggesting diverse configurations during the search process improves the final ensemble, we add additional experiments. Concretely, 1) we **analyze the update frequency of the temporary pool** to show whether suggesting diverse configurations will improve the current ensemble, and 2) we **analyze the final ensemble based on another diversity metric 'disagreement'**. Please refer to 'W4. Diversity analysis' for setups, results, and analysis.\n\n### W3. Significance test.\nTo check whether the improvement of DivBO is statistically significant, as suggested by the reviewer, we conduct the Wilcoxon signed-rank test on each dataset given a pair of methods. The difference is significant when the value $p\\leq0.05$ \\[1\\]. We count the number of datasets if 1) DivBO is better than the other method, and the difference is statistically significant (**B**); 2) the difference is not statistically significant (**S**); and 3) DivBO is worse, and the difference is statistically significant (**W**). For each dataset, the rank of DivBO is 1 for **B**, 1.5 for **S**, and 2 for **W**. We compute the pair-wise rank by averaging the rank on 15 datasets. The results are presented as follows,\n\n**DivBO vs. RB-ES.** The pair-wise ranks of DivBO and RB-ES on all datasets are 1.33 and 1.67, respectively. We agree that RB-ES is a strong baseline. Through the significance test, we observe that DivBO performs no worse than RB-ES on 12 datasets and better on 8 datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 8 | 4 | 3 |\n\n**DivBO vs. BO-ES.** The pair-wise ranks of DivBO and BO-ES on all datasets are 1.17 and 1.83, respectively. While the DivBO framework is extended from BO, DivBO generally performs better than BO. 
Concretely, DivBO performs no worse than BO-ES on 13 datasets and better on 12 datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 12 | 1 | 2 |\n\n\n**DivBO vs. RS-ES.** The pair-wise ranks of DivBO and RS-ES on all datasets are 1.07 and 1.93, respectively. DivBO performs no worse than RS-ES on all datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 13 | 2 | 0 |\n\n\n\\[1\\] Wilcoxon, Frank. Individual comparisons by ranking methods. Breakthroughs in statistics. Springer, New York, NY, 1992. 196-202.\n\n",
" Thanks for your constructive feedback! We believe that addressing this feedback will make our paper significantly stronger. The detailed response to each question is as follows,\n\n### W1. Significance test.\nTo check whether the improvement of DivBO is statistically significant, we conduct the Wilcoxon signed-rank test on each dataset given two methods. The difference is significant when the value $p\\leq0.05$ \\[1\\]. We count the number of datasets if 1) DivBO is better than the other method, and the difference is statistically significant (**B**); 2) the difference is not statistically significant (**S**); and 3) DivBO is worse, and the difference is statistically significant (**W**). For each dataset, the rank of DivBO is 1 for **B**, 1.5 for **S**, and 2 for **W**. We compute the pair-wise rank by averaging the rank on 15 datasets. The results are presented as follows,\n\n**DivBO vs. RB-ES.** The pair-wise ranks of DivBO and RB-ES on all datasets are 1.33 and 1.67, respectively. We agree that RB-ES is a strong baseline. Through the significance test, we observe that DivBO performs no worse than RB-ES on 12 datasets and better on 8 datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 8 | 4 | 3 |\n\n**DivBO vs. BO-ES.** The pair-wise ranks of DivBO and BO-ES on all datasets are 1.17 and 1.83, respectively. While the DivBO framework is extended from BO, DivBO generally performs better than BO. Concretely, DivBO performs no worse than BO-ES on 13 datasets and better on 12 datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 12 | 1 | 2 |\n\n**DivBO vs. RS-ES.** The pair-wise ranks of DivBO and RS-ES on all datasets are 1.07 and 1.93, respectively. DivBO performs no worse than RS-ES on all datasets.\n\n\n| | B | S | W |\n| - | :-: | :-: | :-: |\n| DivBO | 13 | 2 | 0 |\n\n\n\\[1\\] Wilcoxon, Frank. Individual comparisons by ranking methods. Breakthroughs in statistics. Springer, New York, NY, 1992. 196-202.\n\n### Q1. Test results.\nWe update the test ranking of four methods on 15 datasets as follows (Each column refers to the mean test rank of the ensemble at that iteration), \n\n| | 25 | 50 | 75 | 100 | 125 | 150 | 175 | 200 | 225 | 250 |\n| - | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| RS-ES | 2.67 | 3.30 | 3.20 | 3.33 | 3.33 | 3.40 | 3.07 | 3.23 | 3.10 | 3.10 |\n| BO-ES | 2.53 | 2.37 | 2.77 | 2.67 | 2.73 | 2.60 | 2.93 | 2.87 | 2.80 | 2.80 |\n| RB-ES | *2.40* | 2.20 | 2.10 | 2.13 | 2.13 | 2.30 | **1.97** | 2.13 | 2.10 | 2.27 |\n| DivBO | *2.40* | **2.13** | **1.93** | **1.87** | **1.80** | **1.70** | 2.03 | **1.77** | **2.00** | **1.83** |\n\nWe observe that DivBO generally achieves the lowest test ensemble error. However, while some datasets are relatively small, building an ensemble on validation set may suffer from a risk of overfitting. At the 175-th iteration, RB-ES performs better than DivBO. To verify whether the improvement at the 250-th iteration is statistically significant, we add a pair-wise significance test (see W1). We will also add the risk of overfitting in the limitation (see 'L1. Limitations' below).\n\n### Q2. Settings for sampling candidate configurations.\nThe configurations to compute the acquisition function are sampled during each iteration. In other words, we need to sample candidate configurations if we want to suggest a new configuration. We sample 1950 configurations randomly from the entire search space and 50 configurations by randomly altering one hyperparameter in the optimal observed configuration. 
This strategy is also applied in auto-sklearn and VolcanoML. The description is also provided in Appendix A.3.\n\n### Q3, Q4 Typos.\nThanks. We will correct the typos.\n\n\n### L1. Limitations.\nTo clarify the limitation, we will add a paragraph in Section 4.4. The updated limitations will be as follows,\n\nLimitation. a) The use of ensemble leads to higher inference latency than using the single best learner (approximately K times where K is the number of learners in the ensemble). This latency can be reduced with the aid of parallel computing if we have sufficient computational resources. In addition, as ensemble selection is fitted on the validation set, there's a risk of overfitting on the test set for small datasets. b) DivBO using Equation 3 as the diversity function can not directly support algorithms that only predict class labels (e.g., SVC). Though DivBO still works by converting the predicted labels to class probability (like [1, 0, …]), other diversity functions can be developed to support those algorithms better.",
" This works present a method for hyperparameter optimization applied to ensemble construction. The proposed method builds a surrogate function for diversity of pairs of classifiers that is then combined with a predictive performance objective and maximized. Experiments show that the method outperforms other ensemble building approaches in the literature. Strengths:\n- The contributions are incremental, inscribed in the already existing framework of ensemble learning through hyperparameter optimization\n- The paper is well written and structured\n- The quality of the experiments is good\n\nWeaknesses:\n- The improvement in performance in comparison with the second best method RB-ES is hinting towards a plateau in performance in terms of ensembling\n The paper and method description are clear and self-contained. The question is are the contributions significant enough to be published at Neurips. The performance improvement is small between DiVBO and RB-ES but the experiment compares a wide number of approaches and showcases RB-ES itself I believe for the first time. Find some questions/suggestions below. \n\n- I don't think showing the ranking of validation errors on Figure 3 is very informative, especially given the small size of some of the datasets -- overfitting is likely happening. You should show the ranking of testing error and push back the validation plot to the appendix. Same thing goes for Figure 4 (or maybe reduce number of methods displayed, prune the worse performing methods such as EO, NES and RS-ES and show both validation and testing errors). \n\n- lines 177-179: are the configurations sampled once per objective function? how many points are sampled?\n\n- line 235: we slightly modifies -> modify\n\n- I think section titles 4.2 and 4.3 should be evaluation OF DivBO/Diversity surrogate and not evaluation on? There is a limited discussion of limitations of the method. There are no potential negative societal impacts warranting a discussion.\n",
" This paper focuses on the problem of CASH (Combined Algorithm Selection and Hyperparameter Optimization) to automatically configure machine learning models (or pipelines), and seeks to improve the quality of the final ensemble generated by various AutoML solutions using the model/pipelines configurations tried during the optimization. Based on the intuition that diversity among the base models improves the quality of any ensemble, this paper seeks to modify a CASH solver to generate a more diverse set of base models, while still maintaining the predictive performance of the base models. To this end, the paper presents a notion of diversity between any two pair of model configurations, and shows how it can be incorporated into a Bayesian Optimization framework (BO) by proposing a new acquisition function that combines the predictive performance and diversity in a way that the predictive performance of selected models is maintained while improving the diversity among the base models. Empirically, the paper demonstrates the ability of the proposed DivBO scheme to generate diverse set of base models (based on their proposed definition of diversity) while solving the CASH problem. The empirical results also show that ensembles created by DivBO improve upon ensembles created by diversity agnostic CASH solvers.\n\n ## Strengths\n\n- The need for diversity in ensembles have been considered for a long time. It is great to see a CASH solver like DivBO that directly incorporates the desired diversity in the final ensemble within the CASH optimization instead of keeping the optimization and the ensembling completely separate as with most existing AutoML solutions that use CASH solvers. This is strong novel contribution.\n\n- Given that diversity and predictive performance is not always aligned, the authors do a great job at clearly highlighting the challenges of incorporating diversity in a CASH solution and the intuitions guiding their proposed scheme. It is true that (i) there is no need to have high diversity from base models that do not perform well (and hence wont probably be part of the final ensemble), and (ii) high diversity should not be at the cost of predictive performance, since that form of diversity will not be useful for the final ensemble. To this end, the authors define an acquisition function for diversity that only relies on diversity from a pool of potential ensemble members, and is able to be incorporated with the acquisition function for the predictive performance with use of ranks instead of raw values.\n\n- Another original strong idea in this paper is the way the predictive performance rank and the diversity rank are weighed in the final acquisition function (equation (5)). The use of a weight schedule is well motivated by the authors with the intuition that initially we want the CASH solver to focus on predictive performance (which allows us to get a strong pool of potential ensemble base learners), and gradually push the optimization to increase diversity without hurting the predictive performance too much.\n\n\n- The authors provide a very concise but appropriate literature review.\n\n\n## Weaknesses\n\n- While it has been known that diversity in the base models of the ensemble can improve generalization, one key component is the definition of diversity. There is not much discussion on why the definition of diversity in equation (3) leads to improved ensemble performance (beyond the presented empirical results). 
As a reader, I would like a better motivation and understanding for this diversity metric since it is a critical part of this paper.\n\n- Another point that is not clear is that, given that we are performing ensemble selection using the greedy selection scheme of Caruana et al (2004), would diversity play any role at all in the final ensemble selection? Given that we greedily select the next ensemble member which most improves the validation loss, what are conditions under which the increased diversity in base learners would lead to selection of the more diverse base learner compared to less diverse ones?\n\n- From the results in Figure 3, it is not clear that DivBO is significantly better than RB-ES and BO-ES. Firstly, with so many baselines, it is more useful to consider pairwise comparisons (as seen in citation [24] and [A]) especially between closely performing baselines. Secondly, comparing the average rank without confidence intervals (or better, some form of Wilcoxon signed rank test of significance), it is not clear if DivBO improves over BO-ES and other baselines significantly because of the incorporated diversity. In fact, in Figure 4 and Table 3, the average performance improvement of DivBO against baselines is well within the predictive performance confidence intervals posted, undermining the expected improved generalization ability of an ensemble with diverse base models.\n\n\n\n[A] Rakotoarison, Herilalaina, Marc Schoenauer, and Michèle Sebag. \"Automated Machine Learning with Monte-Carlo Tree Search.\" IJCAI-19-28th International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2019.\n\n\n- Figure 5 (and also Figure 1) shows how different the next suggested configuration is to the ones in the current pool $\\mathcal{P}$. Hence this highlights the ability of the combined acquisition function and its maximization (probably after the weight saturation) to propose diverse candidate configurations. However, it is not clear (i) how good is the predictive performance of these diverse configuration, (ii) how this improves the predictive performance of the ensemble. The authors also present the average validation performance of the base models in the ensemble pool in Figure 5, highlighting that the base models' performances do not drop because of the increased diversity, but it does not say anything about improving the ensemble performance -- it is quite possible that DivBO proposes diverse configurations but they are not included in the ensemble, in which case, neither the ensemble performance nor the average base model performance would be affected by the diversity. This figure does not seem to make it clear that the increased diversity is playing a significant role in improving the ensemble quality.\n\n - While DivBO- is considered in the experiments, are there results (empirical or theoretical) highlighting the difference between DivBO and DivBO without the weight schedule in equation (5); that is, the weight is fixed to $\\beta$ ($w = \\beta$)? While it seems intuitive that DivBO with the weight schedule would perform better, it would be good to have empirical evidence.\n\n- The paper spends a lot of space on how the surrogate model for the diversity metric is created and how it generalizes. This is useful detail but lot of it appears to be straightforward given observations of the metric much like most surrogate modeling. 
It would great to understand what were the challenges in the surrogate modeling of the diversity which motivated the authors to describe and study the surrogate modeling of the diversity metric in such detail, instead of studying, for example, the effect of the different choices in DivBO (such as effect of $\\beta$ and $\\tau$ in the weight schedule in the main paper, or the number of samples for the combined acquisition function optimization).\n\n- The definition of the diversity acquisition function is not entirely clear. It appears from the definition and the following text that, in one trial, for each of the $\\theta \\in \\mathcal{P}$, we sample from $M_{div}(\\theta, x)$ and then take a minimum over all $\\theta \\in \\mathcal{P}$, and take an average over $N$ such trials. Can you please explain why we are using a mean over minimum-of-per-theta samples and not just minimum of per-theta-sample-means? This choice seems a bit unintuitive and I would like to understand if this plays a significant role in DivBO, and if so, why?\n\n\n- Is BO exactly the same as DivBO- where $w=0$ throughout the optimization (or equivalently BO-ES and DivBO)? Otherwise different performance surrogate function and/or acquisition function maximization strategies will lead to differences and it won't be clear if the difference between BO and DivBO is because of the diversity incorporation or just because of other differences in the BO configurations.\n\n- In the evaluation in section 4.2, the correlation between the true diversity and predicted diversity with $M_{div}$ appears positive but still is only around 0.5 even with 250 configurations implying $250^2$ observations to fit the $M_{div}$ surrogate model, which seems to be a large number of observations (albeit non-i.i.d. ones). Is there an explanation for why the $M_{div}$ quality is not much higher? What are the potential challenges?\n\n- In the definition of diversity in equation (3), the predicted class probability $T_{x_i}(s)$ for some configuration $x_i$ and validation sample $s$ require that the trained model generates probabilities. How are the class probabilities generated from discriminative models such as SVC (linear or kernel) which do not inherently produce class probabilities but rather directly output class labels?\n\n\n- In Figure 6, why do the different schemes have different number of base learners? It is weird to compare different baselines with different number of ensembles.\n The authors claim in the Checklist that they discuss the limitations of the proposed scheme in Figure 3 and Section 4.3, it is not clear from the figure or the subsection what limitations of DivBO they have discussed. \n",
" In this work, the authors aim to improve CASH solutions by favoring diverse predictors during the optimization which are more appropriate during post-hoc ensembling. In order to achieve this, the authors adapt the acquisition function in the Bayesian optimization by a diversity term. Therefore, selected candidates which are a trade-off between predictive performance and a diverse addition to the current ensemble. The authors compare their method against standard CASH methods, automated ensemble learning methods and AutoML methods + posthoc ensembling on 15 datasets. **Summary**\n\nThe idea is reasonable, is of academic relevance and to some degree relevant for practitioners. The method is clearly described. The empirical evaluation raises some concerns regarding the usefulness of the proposed method. While the method usually ranks best, the improvement over a random search with post-hoc ensembling does not seem to be statistical significant and no significance test is provided to show otherwise. Further ablation studies could have been conducted to provided deeper understanding about the importance of diverse models and the hyperparameter sensitivity.\n\n**Details**\n\nIt is common practice to select upfront a diverse set of algorithms, train a couple of them with different hyperparameters and use a weighted average across them. This raises the question how the proposed method compares against this simple baseline. A method in the literature that makes use of this fact is [1]: They decompose the CASH problem in K HPO problems, solve those and then use post-hoc ensembling. This will also result in diverse, well-performing models, mimics the human strategy but does not explicitly consider the search for diverse candidates given an ensemble. It would be interesting to understand whether this is important and whether the proposed method actually makes efficient use of it. At the moment we only know that the considered baseline methods are skewed towards better performing algorithms which reduces diversity. The solution in the paper mentioned above seems to be a very simple solution to overcome this problem.\n\nFigure 6 is great since it demonstrates that DivBO is indeed finding ensembles with higher diversity. However, this is not particularly surprising given that the pool of candidates for the post-hoc ensembling is more diverse from the beginning. I think it would be more interesting to understand the importance of the diverse models for the ensemble. This could be done by using a greedy post-hoc ensembling approach and report the order of models or report the performance of the ensembling when removing the most diverse models. Optimally, those results are reported across all datasets and not a single one.\n\nWhile the test error in Table 3 of DivBO is oftentimes the lowest, the improvement over BO-ES or even RS-ES is small and given the high standard deviation, I expect the improvement to not be statistically significant. It would be great if the authors could provide a significance test.\n\nAddition of two new hyperparameters. The provided sensitivity analysis on a single dataset is insufficient to judge how drastically the optimal setting changes with different datasets. The authors mention that the sensitivity analysis was the motivation for the final choice of the hyperparameter settings. However, the optimal solution is at the edge of the grid. How does the grid look like for larger or smaller taus and larger betas? 
Getting an idea about the sensitivity based on only one dataset is hard. It would be important to understand whether the best settings are significantly different across datasets.\n\n\nThe use of an ensemble in the first place is a limitation which is not discussed. In practice, ensembles may be well-performing but they are typically not used since they are very hard to understand, have higher inference time and are harder to maintain.\n\n**References**\n\n[1] Martin Wistuba, Nicolas Schilling, Lars Schmidt-Thieme: Automatic Frankensteining: Creating Complex Ensembles Autonomously. SDM 2017: 741-749\n Is the improvement statistically significant?\n\nHow important are the most diverse models in the final ensemble? Discussion on problems of ensemble models could be added.\n\nMaybe the authors can be more precise about where they describe their limitations. The section referred to in the checklist does not seem to discuss limitations.",
" This work proposed a method to improve the ensemble performance under the CASH framework. The basis of the method is based on observations by many previous work that a diverse set of base learners will usually improve ensemble performance. The method jointly searches for configurations not only with a good objective but also diverse ones compared to the configurations that have been evaluated.\n\nTo achieve that, the paper proposed a surrogate to measure diversity between a pair of configurations and combine the acquisitions for quality and diversity through a weighted ranking. Experiments demonstrated improved performance compared to well thoughted baselines.\n The strengths of the paper include:\n- A novel idea to consider both diversity and performance during CASH. \n- Very good presentation quality with clear structure and detail.\n- Informative experiments result to support the claims.\n\nThe weakness of the paper include:\n- Since this is an empirical paper, I would expect evaluations on more datasets, especially given all the datasets are less than 20k rows. For example the [OpenML benchmark suites](https://openreview.net/forum?id=OCrD8ycKjG).\n\nNice to have:\n- There is a strong baseline called [AutoGluon Tabular](https://github.com/awslabs/autogluon), which reports competitive empirical results by training with default HPs and then doing multi-layer stacking. It would be nice to know how the proposed method compares to it.\n\n==== After reading the author's response ===\n\nThe authors addressed all my questions and comments, I will increase the score to 7. - I am not sure about “... while ensuring the performance of base learners” in line 328. Is the average performance a good measure to claim that? As we have seen in Figure 3 and the author also wrote in line 286, “We also find that DivBO- performs worse than BO without ensemble learning”. So the best base learners found by DivBO are clearly worse than others.\n- Minor: In Figure 5, “-d” and “-p” are not explained.\n - I would suggest the authors highlight the potential overfitting issue, which has been observed in their experiments.\n- Also, the ensemble can lead to slow inference and one should still consider a single model if inference time is critical.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"l04QgmfZ4X3",
"3b50w2ZTmPQ",
"M5NO6EV9AsM",
"re3QDKRZGU",
"3FXOy6Al1b",
"vCL0p_5NGJw",
"cK5D5rKp9ZH",
"s1A-mARzfO8",
"NdCLAPKMxW5",
"N_d3oCuQXkC",
"RcP-nahbnPj",
"An7NtAp-KSi",
"KRMaMlyr-m0",
"xGsbS6XKhBl",
"Dg_A3p0_NDx",
"nips_2022_sFQJ0IOkHF",
"nips_2022_sFQJ0IOkHF",
"nips_2022_sFQJ0IOkHF",
"nips_2022_sFQJ0IOkHF"
] |
nips_2022_FNzLe2-ppRO | TREC: Transient Redundancy Elimination-based Convolution | The intensive computations in convolutional neural networks (CNNs) pose challenges for resource-constrained devices; eliminating redundant computations from convolution is essential. This paper gives a principled method to detect and avoid transient redundancy, a type of redundancy existing in input data or activation maps and hence changing across inferences. By introducing a new form of convolution (TREC), this new method makes transient redundancy detection and avoidance an inherent part of the CNN architecture, and the determination of the best configurations for redundancy elimination part of CNN backward propagation. We provide a rigorous proof of the robustness and convergence of TREC-equipped CNNs. TREC removes over 96% computations and achieves 3.51x average speedups on microcontrollers with minimal (about 0.7%) accuracy loss. | Accept | This work proposes a redundancy pruning mechanism for convolutional neural networks to optimize their performance for extremely resource-constrained edge devices, such as microcontrollers.
The work is based on a novel gradient-optimized locality-sensitive hashing approach that removes a lot of the indeterminacy of previous approaches. This method can achieve reliably high performance while improving network inference latency significantly (e.g., 4x in some circumstances).
I think that the idea alone of tuning LSH via SGD directly for clustering purposes is a good one and is of interest to the wider community. | train | [
"jFr1JvQh4K",
"K7QpD9LNERo",
"Zmn-3rlzKo",
"Tvco_5o5H2X",
"SkPpnLxxwox",
"oah_EQztILA",
"Zfu5kgabGW",
"F2oQgjf50EW"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" # NeurIPS 2022 author response\n\nWe thank the reviewers for insightful comments. We focus on answering the following concerns from reviewers:\n## Motivation for TREC design\nAs mentioned in the Introduction Section, in addition to designing a differentiable implementation of LSH clustering, TREC's contribution lies in proposing **a principled way to detect and avoid transient redundancy** (line 39). Deep Reuse is an ad-hoc approach to eliminating transient redundancy **during inferences** (line 27-30). It is an acceleration tool outside the DNN pipeline. In its use, data must be explicitly pulled out of the DNN pipeline, go through an LSH-based online clustering and matrix multiplication, and then get the results returned back to the DNN pipeline. On using LSH clustering, besides the large fluctuations in accuracy due to the randomly generated hashing vectors (line 30-31), Deep Reuse actually changes the inference process (line 37-38). Therefore, the weights of the original network may no longer best suit the inference.\n\nThe difference brought about by TREC is that it incorporates the LSH clustering into the convolutional layer to construct a new convolution operator (line 39-41). This operator, through backpropagation-based training, learns the most effective hash vectors for eliminating redundancy and maintaining accuracy (line 47-51). **In this way, the weights of the network and the hashing vectors get updated hand in hand throughout the training, yielding the much better performance and stability for inference than Deep Reuse does.** \n## Larger datasets\nOur experiment was on Microcontrollers while the previous Deep Reuse studies were on servers and laptops. Rather than gigabytes of memory, the Microcontrollers have only 2MB memory. The stringent resource on Microcontrollers stresses the importance of more effective redundancy removal, and also limits the size of the workload that can fit into it. Nevertheless, we managed to add an experiment of running the ***ResNet-34*** on ***ImageNet*** (64x64 since 224x224 and 256x256 run out of the MCU memory). By removing 96.3% of transient redundancy, TREC achieves an average single-layer speedup of 4.64x. When applied to the full ResNet, TREC achieves a speedup of 3.08x with 0.4% accuracy loss. The experimental results are presented in Section 5.3 and Appendix C, D.\n## Accuracy of Deep Reuse\nFrom the authors of Deep Reuse, we learned that the results reported in their paper were the best they picked after many trials on the Cifar dataset, including an extensive tuning of the hyperparameters (e.g., learning rates) and trials of many randomly generated hash vectors. The first author of the Deep Reuse paper did the experiments several years ago and couldn't find the hyperparameters anymore; the results were not reproducible now. The results we included in the paper were the accuracy range we observed in 150 trials of Deep Reuse.\n## Modifications to the paper\n1. An experiment of running ResNet-34 on ImageNet (64x64). See Section 5 and Appendix C, D.\n2. A verification of practical applications satisfying the sufficient direction assumption. See Appendix B.\n3. An ablation analysis of the impact of batch size on single-layer performance. See Appendix C.2.\n",
" # NeurIPS 2022 author response\nWe thank the reviewers for insightful comments. We focus on answering the following concerns from reviewers:\n## Motivation for TREC design\nThe motivation of TREC design is to optimize the inference performance on edge devices, and the LSH-based clustering it uses is inspired by Deep Reuse (line 39). Deep Reuse is an ad-hoc approach to eliminating transient redundancy during inferences (line 27-30). It is an acceleration tool outside the DNN pipeline. In its use, data must be explicitly pulled out of the DNN pipeline, go through an LSH-based online clustering and matrix multiplication, and then get the results returned back to the DNN pipeline. The LSH-based online clustering step is essential for effectively finding and avoiding redundancy in activation maps. It's a hashing vector based approach. As a stand-alone tool, Deep Reuse has a generic design, using randomly generated vectors as the hashing vectors in LSH. The randomness leads to large fluctuations in accuracy (more than 5%) across runs, meaning that the method may yield significantly different performance on one dataset in two different runs (line 30-31). It is detrimental to the reliability of ML models. Typically, nondeterministic methods are not preferred for real-world applications for their results are hard to reproduce and depend on. In addition, because Deep Reuse changes the inference process, the weights of the original network may no longer best suit the inference. \n\nThe difference brought about by TREC is that it incorporates the LSH clustering into the convolutional layer to construct a new convolution operator (line 39-41). This operator, through backpropagation-based training, learns the most effective hash vectors for eliminating redundancy and maintaining accuracy (line 47-51). In this way, the weights of the network and the hashing vectors get updated hand in hand throughout the training, yielding the much better performance and stability for inference than Deep Reuse does. As the key difference between TREC and Deep Reuse is in the enabled backpropagation-based training of the LSH, the comparison presented in the paper between Deep Reuse and TREC is equivalent to an ablation analysis of the backpropagation step; our experimental results confirm its effectiveness.\n## Definition of transient redundancy\nTransient redundancy arises from the activation maps and inputs which changes across inferences (line 21-24). It needs to be detected through online methods, and can be intuitively seen from the existence of similar patches within an image or activation map. Our results in the paper show that the redundancy can be as much as 96% (line 298). \n## Explanation of “eliminate 96% of the transient redundancy” \nSorry for not clarifying the definition of redundancy ratio. When we say that 96.22% of the transient redundancy is eliminated, we mean the LSH clustering has identified such an amount of similar vectors. It is expressed as the redundancy ratio ( $r_t$ ) in Section 4.3 to measure the size reduction of the input matrix (line 266-267). It is defined as $r_t=1-N_c/N$, where $N_c$ is the number of clusters and $N$ is the original number of rows (or neuron vectors) in the input matrix. Given such a definition, 100% transient ratio is unattainable, as it implies an empty centroid matrix. In addition, LSH, as an approximate clustering method, is inherently unable to achieve absolute accuracy and eliminate all redundancy. 
\n## CNN choices\nOur experiment was on Microcontrollers while the previous Deep Reuse studies were on servers and laptops. Rather than gigabytes of memory, the Microcontrollers have only 2MB memory. The stringent resource on Microcontrollers stresses the importance of more effective redundancy removal, and also limits the size of the workload and network that can fit into it. Nevertheless, we managed to add an experiment of running the ResNet-34 on ImageNet (64x64 since 224x224 and 256x256 run out of the MCU memory). By removing 96.3% of transient redundancy, TREC achieves an average single-layer speedup of 4.64x. When applied to the full ResNet, TREC achieves a speedup of 3.08x with 0.4% accuracy loss. The results are presented in Section 5.3 and Appendix C, D.\n## Accommodation to BN layers\nAfter TREC clusters the neuron vectors and computes the multiplication results of the reduced matrix (centroid matrix) and weight matrix, it uses the results to create a result matrix of the output dimensions of the original convolutional layer (line 81-82). By keeping the result shape unchanged, it ensures no changes needed on any subsequent network layers (including the BN layers).\n## Modifications to the paper\n1. An experiment of running ResNet-34 on ImageNet (64x64). See Section 5 and Appendix C, D.\n2. A verification of practical applications satisfying the sufficient direction assumption. See Appendix B.\n3. An ablation analysis of the impact of batch size on single-layer performance. See Appendix C.2.\n",
" # NeurIPS 2022 author response\nWe thank the reviewers for insightful comments. We focus on answering the following concerns from reviewers:\n## Motivation for backpropagation\nThe motivation of TREC design is to optimize the inference performance of the model on edge devices, and the LSH-based clustering it uses is inspired by **Deep Reuse** (line 39). Deep Reuse is an ad-hoc approach to eliminating transient redundancy **during inferences** (line 27-30). It is an acceleration tool outside the DNN pipeline. In its use, data must be explicitly pulled out of the DNN pipeline, go through an LSH-based online clustering and matrix multiplication, and then get the results returned back to the DNN pipeline. The LSH-based online clustering step is essential for effectively finding and avoiding redundancy in activation maps. It's a hashing vector based approach. As a stand-alone tool, Deep Reuse has a generic design, using ***randomly generated vectors*** as the hashing vectors in LSH. The randomness leads to large fluctuations in accuracy (more than 5%) across runs, meaning that the method may yield significantly different performance on one dataset in two different runs (line 30-31). It is detrimental to the reliability of ML models. Typically, nondeterministic methods are not preferred for real-world applications for their results are hard to reproduce and depend on. In addition, because Deep Reuse changes the inference process, the weights of the original network may no longer best suit the inference. \n\nThe difference brought about by TREC is that it incorporates the LSH clustering into the convolutional layer to construct a new convolution operator (line 39-41). **This operator, through backpropagation-based training, learns the most effective hash vectors for eliminating redundancy and maintaining accuracy (line 47-51).** In this way, the weights of the network and the hashing vectors get updated hand in hand throughout the training, yielding the much better performance and stability for inference than Deep Reuse does. **As the key difference between TREC and Deep Reuse is in the enabled backpropagation-based training of the LSH, the comparison presented in the paper between Deep Reuse and TREC is equivalent to an ablation analysis of the backpropagation step;** our experimental results confirm its effectiveness.\n## Larger datasets\nOur experiment was on Microcontrollers while the previous Deep Reuse studies were on servers and laptops. Rather than gigabytes of memory, the Microcontrollers have only 2MB memory. The stringent resource on Microcontrollers stresses the importance of more effective redundancy removal, and also limits the size of the workload that can fit into it. Nevertheless, we managed to add an experiment of running the ResNet-34 on ImageNet (64x64 since 224x224 and 256x256 run out of the MCU memory). By removing 96.3% of transient redundancy, TREC achieves an average single-layer speedup of 4.64x. When applied to the full ResNet, TREC achieves a speedup of 3.08x with 0.4% accuracy loss. The experimental results are presented in Section 5.3 and Appendix C.\n## Theoretical bounds\nAccording to Eq. 4.9 and 4.11 in the paper, as long as the practical application satisfies Assumptions 1 and 2, we can obtain a theoretical upper bound of the sum of squared gradients without considering the limit. 
The relevance of this theoretical upper bound to practical applications is reflected in the values of the constants such as $L$, $\\mu$, $M$, and the specific setting of the learning rate. From Eqs. 4.5-4.8, the constants $L$, $M$, and $M_G = M_v + \\mu_G^2$, which are greater than 0, exist according to the L2-norm. And through experimental verification, we find that the constant $\\mu$ is always greater than 0 during the training process and gradually converges to 1 with time, satisfying the sufficient direction assumption. We add a figure in Appendix B to show the trend of the constant $\\mu$.\n## Answers to minor issues\n### Detailed single-layer performance\nBoth single-layer and end-to-end performance are averages obtained by inference with a batch size of 1 on the MCU, since large batch sizes can exhaust MCU memory. We will clarify the setting in Section 5. Also, we will illustrate the impact of batch size on performance in Appendix C.2 by measuring the redundancy ratio on the server. Actual numbers for single layer performance see link: https://anonymous.4open.science/r/TREC_Single_Layer-AC65.\n### The need for edge inference acceleration\nWith the rise of cloud computing and the AIoT, microcontrollers have dominated the market due to their low power consumption. It is estimated that 250 billion microcontrollers are already in use, and their volumes are expected to reach 1 trillion in various industries by 2035. Therefore, the computing demand of MCUs will last for a long period of time. Furthermore, since TREC is not hardware dependent, we consider applying TREC to other specialized hardware as a future study.\n",
" # NeurIPS 2022 author response\n\nWe thank the reviewers for insightful comments. We focus on answering the following concerns from reviewers:\n## Difference between TREC and Deep Reuse\nDeep Reuse is an ad-hoc approach to eliminating transient redundancy during inferences (*line 27-31*). It is an acceleration tool outside the DNN pipeline. In its use, data must be explicitly pulled out of the DNN pipeline, go through an LSH-based online clustering and matrix multiplication, and then get the results returned back to the DNN pipeline. The LSH-based online clustering step is essential for effectively finding and avoiding redundancy in activation maps. It's a hashing vector based approach. As a stand-alone tool, Deep Reuse has **a generic design**, using ***randomly generated vectors*** as the hashing vectors in LSH. The randomness leads to large fluctuations in accuracy (more than 5%) across runs, meaning that the method may yield significantly different performance on one dataset in two different runs. It is detrimental to the reliability of ML models. Typically, nondeterministic methods are not preferred for real-world applications for their results are hard to reproduce and depend on. In addition, because Deep Reuse changes the inference process, the weights of the original network may no longer best suit the inference. \n\nThe difference brought about by TREC is that it incorporates the LSH clustering into the convolutional layer to construct a new convolution operator (*line 39-41*). This operator, through **backpropagation-based training**, learns ***the most effective hash vectors*** for eliminating redundancy and maintaining accuracy (*line 47-51*). In this way, the weights of the network and the hashing vectors get updated hand in hand throughout the training, yielding the much better performance and stability for inference than Deep Reuse does. As the key difference between TREC and Deep Reuse is in the enabled backpropagation-based training of the LSH, the comparison presented in the paper between Deep Reuse and TREC is equivalent to an ablation analysis of the backpropagation step; our experimental results confirm its effectiveness.\n\n## Definition of lasting redundancy and transient redundancy\nWe coined the two terms to stress the differences of the two kinds of redundancies in DNN, which are essential for understanding the needs for different approaches. We believe this classification is intuitive. Lasting redundancy arises from model parameters which usually stay stable across inferences (*line 18-21*), while transient redundancy arises from the activation maps and inputs which changes across inferences (*line 21-24*). Because of the different natures, redundancy in the former can be detected through offline compression methods, but the latter needs to be detected through online methods during inferences. Transient redundancy can be intuitively seen from the existence of similar patches within an image or activation map. Our results in the paper show that the redundancy can be as much as 96% (*line 298*).\n## Explanation of “eliminate 96% of the transient redundancy”\nWhen we say that 96.22% of the transient redundancy is eliminated, we mean the LSH clustering has identified such an amount of similar vectors. It is expressed as the redundancy ratio ( $r_t$ ) in Section 4.3 to measure the size reduction of the input matrix (*line 266-267*). 
It is defined as $r_t=1-N_c/N$, where $N_c$ is the number of clusters and $N$ is the original number of rows (or neuron vectors) in the input matrix. If the reduction of transient redundancy is to be defined in terms of ***FLOPs***, given that the dimensions of the input and weight matrices unfolded by Im2col are $N \\times K$ and $K \\times M$, the redundant computation reduced by the matmul step is $r_t \\cdot N \\cdot K \\cdot M$. \n\n## Answers to minor issues\n### Actual speedups\nWith the sublinear performance of LSH, TREC achieves an average speedup of 4.4x compared to conventional convolution operator (*Section 5.2 and Appendix C*). When applied to full networks, TREC achieves an average speedup of 3.51x (*Section 5.3*).\n### Channel pruning\nChannel pruning is used only on ZfNet so that it can fit into the Microcontroller (*line 291*). In our speedup comparisons, the baseline and our TREC performance are on the same pruned ZfNet; in another word, the speedups come from the more effective redundancy removal by TREC, not channel pruning. \n### Code Release\nYes, we plan to release the code along with the models and datasets. \n## Modifications to the paper\n1. An experiment of running ResNet-34 on ImageNet (64x64, since 224x224 and 256x256 run out of the MCU memory). See *Section 5* and *Appendix C, D*.\n2. A verification of practical applications satisfying the sufficient direction assumption. See *Appendix B*.\n3. An ablation analysis of the impact of batch size on single-layer performance. See *Appendix C.2*.",
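As a quick numerical illustration of the redundancy ratio $r_t = 1 - N_c/N$ and the FLOP accounting described in the response above, here is a minimal Python sketch with toy matrix shapes. The values of N, N_c, K, and M are invented for illustration and are not taken from the paper's experiments; the cost of the LSH clustering step itself is deliberately not modeled.

```python
def redundancy_ratio(n_rows: int, n_clusters: int) -> float:
    """r_t = 1 - N_c / N: fraction of im2col rows replaced by cluster centroids."""
    return 1.0 - n_clusters / n_rows

# Toy shapes for one convolutional layer after im2col (illustrative only):
# input matrix is N x K, weight matrix is K x M.
N, K, M = 4096, 288, 64
N_c = 155                       # hypothetical number of clusters found by LSH

r_t = redundancy_ratio(N, N_c)            # ~0.962, i.e. ~96% of rows are redundant
full_macs = N * K * M                     # multiply-accumulates of the full GEMM
saved_macs = r_t * N * K * M              # work skipped by multiplying only centroids

print(f"r_t = {r_t:.4f}")
print(f"GEMM MACs: {full_macs:,} -> {full_macs - int(saved_macs):,} after clustering")
```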
" The paper proposes a method to reduce the number of operations in a CNN by reducing the input to subsequent layers via the clustering of (intermediate) input features/pixels. The authors claim to incorporate the method to be more inherently combined with network architecture coining it, transient redundancy convolution. In particular, the method clusters the rows (convolutions) and replaces the original input with clustered centroids. The paper addresses the issues of how to perform backpropagation with the proposed formulation and provides guarantees for robustness and convergence. \n The idea of clustering the inputs on the fly is interesting compared to the reducing the size of the input to NN only, or compressing the network and fixing its parameters. In TREC for every input data, we perform individual clustering at every stage of the forward pass to reduce further convolution operations. As a result, the redundancy is computed at every stage of the inference which makes it more flexible than reducing the size of the input to NN or decreasing the size of the network. \n\nThe analysis of complexity, convergence, and robustness is giving better insight into the theoretical guarantees of the algorithm. \n\nThe paper seems to heavily rely on the DeepReuse. It's following the same ideas, for example, using the same method for “clustering”, LSH hashing, same parameters L and H. Please state clearly the differences between the two approaches, and what elements of your method cause the improvements in TREC as compared to DeepReuse (also, I hardly see the randomness as the big disadvantage of DeepReuse).\n In my understanding by clustering, we reduce the number of convolution operations (or scalar products given the GEMM). Theoretically, it sounds promising but in practice, can you actually obtain any speed-ups? Do you also decrease the number of channels of your model (maybe if you remove all the convolutions for a given channel)? Are you planning to release the code?\n\nI'm not sure where the terms come from. For example, in pruning literature, I never heard of the term “lasting redundancy”. Also, the phrases “eliminate 96% of the transient redundancy” can be puzzling without the proper definition in terms of FLOPs, for example. I'd be cautious with using new terms, which may create confusion.\n Little is said about the work’s limitations. \n",
" This paper proposes, implements and tests a method to remove transient (data-point dependent) redundancy in the prpagation and backpropagation of convolutional neural networks. Originality: medium\nThis paper proposes the use LSH for removing (transient) data-point dependent computation cost of neural networks. While the idea has been used several times earlier (eg. in the Reformer [https://arxiv.org/abs/2001.04451]), its application for convolutional networks poses several technical challenges. While it is not too hard to see the solution to those challenges, it is still important and useful for giving a clear recipe for using them in practice and measuring their effectiveness in a real world scenario. In that sense, I think this paper makes a real contribution. Also the paper gives some theoretical guarantees in the limit which are new.\n\nQuality: medium\nThis work proposes an LHS-based method to remove redundant computations in convolutional network. The method needs to do online clustering on the edge device, which results in a data-dependent reduction of computations. The paper reports an \"acceleration\" of the layers applying this method by up to a 18X speedup. However this is not backed by a clear table of actual numbers, nor its dependence on batch size, neither ablation analysis or even mentioning the device on which this acceleration was reached. More thorough numbers are reported on the inference of the whole network for micro-controllers and the paper reports a total of 3.61X speedup on microcontrollers for the CIFAR-10X network, which would be impressive on its own and a good start.\nHowever, the lack of evidence on data sets with larger than 32x32 images is disappointing and may question the relevance of the approach for real world use cases.\nThe theoretical analysis is nice and seems sound to me, but it works only in the limit and does not come with actual bounds, so its relevance for practical application seems questionable. Also it is unclear how far the theoretical bounds are from the real situation and whether the practical situations are within the range of the theoretical results being of any relevance whatsoever.\n\nClarity: medium\nThere are several unclear points in this paper:\n- The motivation for implementing backpropagation at all is unclear: Do we want to train on the edge device? Or do we want to fine-tune the model for the edge device such that it performs better after optimization than the original network? I assume the latter, but this questions are left for the reader, while the paper puts a lot of emphasis on making backpropagation work. In the latter case (the network finetuned for higher quality evaluation), the paper lacks clear evidence and ablation analysis for the need/efficacy of that appraoch.\n\nSignificance: medium\nIf the method works it could give a clear recipe to improve the latency of CNN inference on certain compute-constrained edge devices. However, it is likely that with inroads of deep learning, specialized hardware will be used in most use cases, which might remove the need for such methods in the not too far future.\n\nGenerally, I think that the paper gives a valuable baseline and recipe for reducing the latency of inference of CNN on microcontrollers, but the experimental evidence given in this paper is not very conclusive, neither is the relevance of the theoretical evidence. 
I would raise my score in the presence of stronger practical evidence (evaluation on a dataset with larger images) and better motivation(/ablation analysis, if applies) for the backpropagation-related sections. - Do we want to train on the edge device? Or do we want to fine-tune the model for the edge device such that it performs better after optimization than the original network?\n- Any empirical evidence on data-sets besides CIFAR-10?\n- Do the theoretical evidence give rise to concrete, quantitative bounds? Yes the paper addresses the limitations of the methods. I can't see any particular negative societal impact specific to this line of work besides generic concerns that would apply to any methods that improve the efficiency of neural network evaluation in general.",
" This paper proposes a principled way to detect and avoid transient redundancy for CNN inference. By introducing a new CNN operator named TREC, it fuses the detection and avoidance of transient redundancy into CNN, making them part of its inherent architecture. TREC comprises three components in series on top of Im2col+GEMM based convolution: LSH-based clustering that reduces the size of the input to the conversion, matrix multiplication of weights and the compressed input, and a recovery step that restores results to the original GEMM output sizes. Strength\n- By introducing a new CNN operator named TREC, it fuses the detection and avoidance of transient 40 redundancy into CNN, making them part of its inherent architecture\n- TREC is compatible with both forward and backward propagations, enabling a plug-and-play replacement for convolutional layers in mainstream CNNs without additional effort.\n- Theoretical analysis on the convergence is also provided.\n\nWeakness\n- The introduction of transient redundancy is not quite clear, it would be great to have some intuitive illustrations or numbers to demonstrate the redundancy. Why TREC makes the design and how it makes inference faster in sec 3.1 is also not quite clear.\n- The evaluated CNN architecture seems not representative for common usage e.g., resnet.\n- It is not clear whether architectures with BN layers can use TREC conv layers and how BN layers to be changed with the changed input feature maps. - “Experiments show that by removing 96.22% transient redundancy, TREC achieves an average of 4.28x speedup compared to the conventional convolution operator.” What is the maximal possible speedup if 100% transient redundancy is removed?\n\n- How would the BN layers (if the architecture has BN) be accommodated with such reduced input feature maps? No limitation is discussed.",
" The paper proposes a convolution operation, dubbed TREC, with transient redundancy that is exploited during inference to reduce computation. Specifically, the authors use LSH to cluster layer inputs in the forward pass and use the cluster centers in the weight multiplication to compute the outputs. The authors propose a differentiable implementation for the LSH-based clustering which allows for training the TREC, thereby improving the accuracy compared to prior methods. In addition to empirical evaluations, the authors provide a theoretical analysis of the convergence and robustness of TREC. Strengths:\n- The paper is well written and the ideas are clearly explained.\n- The combination of theoretical evidence with empirical results helps strengthen the claims.\n- The ideas for making the clustering differentiable are interesting.\n\nWeaknesses:\n- The LSH-based clustering is previously proposed in the Deep Reuse paper. Therefore, the main contribution here is enabling backward propagation with a differentiable implementation for the clustering. The authors should explicitly clarify this in the paper and give proper reference to the prior art.\n- The method is only evaluated on small-scale benchmarks (CIFAR-10) while prior work includes results on ImageNet. As such, a question rises regarding the applicability of the proposed method when evaluating larger scale datasets. Please see the weaknesses above. \n\nQuestions and limitations:\n- The reported accuracies for the Deep reuse paper are quite different from what is reported in the original paper (they claim no accuracy loss). Can the authors please clarify on the main differences in the evaluation setup that causes this difference?\n\n- Since the prior work Deep reuse, which is using a quite similar method, has evaluations on ImageNet, the authors are encouraged to add these evaluations to show that the scope of the proposed method is not limited to small datasets and the accuracy gains are persistent in more challenging benchmarks. The authors have not discussed the limitations and potential negative social impact of their work"
] | [
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"F2oQgjf50EW",
"Zfu5kgabGW",
"oah_EQztILA",
"SkPpnLxxwox",
"nips_2022_FNzLe2-ppRO",
"nips_2022_FNzLe2-ppRO",
"nips_2022_FNzLe2-ppRO",
"nips_2022_FNzLe2-ppRO"
] |
nips_2022__j8yVIyp27Q | Bidirectional Learning for Offline Infinite-width Model-based Optimization | In offline model-based optimization, we strive to maximize a black-box objective function by only leveraging a static dataset of designs and their scores. This problem setting arises in numerous fields including the design of materials, robots, DNAs, proteins, etc. Recent approaches train a deep neural network (DNN) model on the static dataset to act as a proxy function, and then perform gradient ascent on the existing designs to obtain potentially high-scoring designs. This methodology frequently suffers from the out-of-distribution problem where the proxy function often returns adversarial designs. To mitigate this problem, we propose $\textit{\textbf{B}i\textbf{D}irectional learning for offline \textbf{I}nfinite-width model-based optimization}~(\textbf{BDI})$. BDI consists of two mappings: the forward mapping leverages the static dataset to predict the scores of the high-scoring designs, and the backward mapping leverages the high-scoring designs to predict the scores of the static dataset. The backward mapping, neglected in previous work, can distill more information of the static dataset into the high-scoring designs, which effectively mitigates the out-of-distribution problem. Yet, for a finite-width DNN model, the loss function of the backward mapping is intractable and only has an approximate form, which leads to a significant deterioration of the design quality. We thus adopt an infinite-width DNN model and propose to employ the corresponding neural tangent kernel to yield a closed-form loss for more accurate design updates. Experiments on various tasks verify the effectiveness of BDI. The code is available [here](https://github.com/GGchen1997/BDI). | Accept | This paper studies Offline Model-Based Optimization. This paper proposes a gradient-based method for solving Offline MBO problems using infinite-width Deep learning models.
The key novelty of the paper is in the proposed use of a distillation objective to constrain the optimized design-score pairs.
All three reviewers identify the novelty of the problem and the approach.
The paper also presents a strong empirical evaluation on standard benchmarks.
The rebuttal discussion yielded constructive changes in the paper, and the authors are expected to account for the discussion and suggestions in the next iteration of the manuscript.
The AC concurs with the reviews and the discussion thereafter. | train | [
"4vaLL23PTEa",
"3eoHuS5QRM-",
"WIOQV0gZiUn",
"j8dIBmdFIiO",
"JO4prL8inPn",
"6eG9P-Z-oH",
"C5UbIvoNAdF",
"muJrF6zHqN",
"Q-O_ye6F23X",
"r4WTCDYQRif",
"0PilWkDcAX",
"ogzn2WT31Ml",
"ZebZ8nM1VI",
"xaGD_SS9OmF",
"qqfMFrJZBkp",
"bFd5Cx2gSS4",
"2KuycUBCl-m",
"_M9owUpMXf",
"bWVQzEOkk_e"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your great questions, which make our paper stronger.\n\nWe will certainly include the main points of our discussion in an improved version.",
" Thanks for responding to my questions at length, each of my major concerns has been resolved. In my current evaluation of the paper, I intend to increase my rating, pending further discussion with the other reviewers and the Area Chair.",
" Thank you again for your constructive feedback and appreciation.\n \nWe will certainly include our main points of the above response in an improved version. ",
" Thank you for the response. The response has properly addressed my concerns. I will increase my score.",
" Thanks for your detailed review and the questions you raised. We tried to thoroughly elaborate on your raised questions regarding the motivation of backward mapping. Additionally, we have also conducted more experiments to explain the effectiveness of backward mapping. \n\nCould you please kindly provide us with more feedback if we did not fully address your concerns? Thank you and looking forward to further exchanging our thoughts with you.",
" Thank you for your detailed review and your constructive feedback. We tried to address all your listed concerns above, mainly including: \n \n- We have discussed the relationship of our work to one of the related work of BCQ; \n- We have further investigated the effectiveness of backward mapping when considering the sequential nature;\n- We have clarified our evaluation setting;\n- We have explained the reason behind the small discrepancy between ours and previously published results (MIN and COMs); \n- We have reported the results after changing regression to pure maximization;\n- We have conducted the ablation study for the weight of the distillation term.\n\nCould you please further elaborate if anything is still not clear? Thank you and looking forward for your feedback.\n",
" Thank you again for your constructive feedback. As we mentioned in the previous response, we will include your valuable suggestion in an improved draft.",
" I'd like to thank the authors for their response. I will keep my rating of 7.",
" > 2. How does the backward mapping help to distill information into the high-scoring designs? \n Though the authors provide ablation studies to show that the backward mapping does work, \n the authors may want to provide more analyses/visualization to show why the backward mapping works.\n\nWe discuss the motivation of the backward mapping in L34-L57. As shown in Figure 1, pure gradient ascent on the proxy function yields the seemingly high-scoring design $p_{grad}$ with a low ground-truth score. From a model perspective, previous work attempted to train a better proxy closer to the ground-truth to mitigate the out-of-distribution issue. Instead, from a data perspective, together with the forward mapping, our backward mapping can align the high-scoring design with the offline dataset, which constrains the high-scoring design to distill more information from the offline dataset. This leads to a better high-scoring design.\n\nWe discuss the inner mathematical mechanism in L163-L168 **Analysis of M=1**. \nIn our experiments, we aim for one high-scoring design $M=1$ and by minimizing Eq.(13) we can find a high-scoring design xh such that any design in the offline dataset similar to the high scoring design is encouraged to have a high score prediction. In this way, xh tries to incorporate as many high-scoring features from the offline dataset as possible. For the M=1 case, we can observe that a good xh will both (1) have a high predicted score based on the forward mapping; and (2) be able to predict (relatively) high scores for multiple (relatively) high scoring designs in the offline dataset based on backward mapping.\n\nAs per your suggestion, we also provide additional analysis to verify and explain the effectiveness of the backward mapping. We use the NTK k(. , .) to measure the similarity between two points and compute the similarity between the generated high-scoring design xh and the high-scoring designs X in the offline dataset. This is defined as: simi(xh, X) = mean(k(xh, X)). We report the results in the table below. We have added this part into Appendix A.8.\n\n| Task | BDI | forward map | backward map\n|:----:|:----:|:----:|:----:|\n|Ant | $0.0752$ | $0.0739$ | $0.0808$ |\n|TFB | $0.2646$ | $0.2395$ | $0.3570$ |\n\nWe find that simi(xhback, X) > simi(xhbdi, X) > simi(xhforward, X). This can be explained as follows. Since the backward mapping extracts the high-scoring features from the offline dataset, xhback is the closest to the high-scoring designs X in the offline dataset and thus simi(xhbdi, X) is large. While encouraging the design xhforward to explore a high score, the forward mapping alone leads to a design that is far from the offline dataset and thus simi(xhforward, X) is the smallest. BDI can explore a high score (forward mapping) and stay close to the offline dataset (backward mapping), which leads to simi(xhbdi, X) between simi(xhforward, X) and simi(xhback, X).\n \n > 3. I can not understand the predefined score used in the backward mapping. \n In forward mapping, a high can encourage the algorithm to generate with high scores.\n In backward mapping, what is the purpose of training with a virtual score ?\n\n\nThe formulation of BDI effectively supposes that there is a design xh with the predefined target score yh. 
It aims to identify the best xh so that using the NTK: (a) the offline dataset can predict the predefined target score yh for the selected xh (forward mapping); and (b) the selected xh, with associated label yh, can predict the labels of (relatively) high scoring designs in the offline dataset (backward mapping). The yh value thus plays a role in both forward and backward mapping; our experiments demonstrate that performance is robust to the specific choice of yh. \n\n\n\n### Overall\n\n**Does the above reply address your concerns? Thank you again for your instructive review and thoughtful feedback. We hope that there is the opportunity for further discussion with you during the rebuttal phase.**",
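The similarity statistic used in the response above, simi(x_h, X) = mean(k(x_h, X)), is simple to reproduce once a kernel function is fixed. The sketch below is only illustrative: it uses an RBF kernel as a stand-in for the neural tangent kernel that BDI actually relies on, and random arrays in place of the real designs, so it shows the bookkeeping rather than the authors' implementation.

```python
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 0.1) -> np.ndarray:
    """Stand-in kernel k(., .); BDI itself uses the NTK of an infinite-width model."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def simi(x: np.ndarray, X: np.ndarray) -> float:
    """simi(x, X) = mean over the offline designs x_i of k(x, x_i)."""
    return float(rbf_kernel(x, X).mean())

rng = np.random.default_rng(0)
X_high = rng.normal(size=(100, 60))   # stand-in: high-scoring designs from the offline set
x_h = rng.normal(size=(1, 60))        # stand-in: the generated high-scoring design

print(f"simi(x_h, X) = {simi(x_h, X_high):.4f}")
```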
" ### General Reply\n\nMany thanks for your valuable and constructive comments on clarifying, correcting, and improving the materials in this paper! We will carefully revise the paper according to your comments as explained below.\n\n### Strengths And Weaknesses:\n\n\n > 1. What is the motivation for using the backward mapping? The authors may want to provide \n an intuitive explanation of why we need a backward mapping besides the forward mapping.\n\n\nThe backward mapping is inspired by data distillation [4] and the Decision Transformer [3]. As we discuss in L329-L335 of the related work section, data distillation [4] keeps the model fixed and attempts to distill the knowledge from a large training dataset into a small one. The experiments in [4] are on MNIST (and CIFAR) with 10 (100 for CIFAR) distilled images with predefined labels being digits in the range 0-9 (categories such as dog, cat...for CIFAR). We observe that the distilled images extract the features of the corresponding digit (dog, cat...for CIFAR) from the training dataset. \n\n [3] Chen et al. Decision transformer: reinforcement learning via sequence modeling. Proc. Adv. Neur. Inf. Proc. Syst, 2021.\n [4] Wang et al. Dataset distillation. arXiv preprint, 2018.\n\nThis observation motivates us to consider what the distilled sample could be if the label becomes a large virtual predefined target score yh in the offline model-based optimization setting. We compare data distillation and backward mapping in the following table, which should help explain our motivation. \n\n| Comparison | Data distillation [4] | Backward mapping |\n|:----:|:----:|:----:|\n| Data x | image in the training dataset D | design in the offline dataset D | \n| Label type y_t | 0-9 for MNIST (dog, cat...for CIFAR); | some measurement of protein/dna/robot... |\n| Label value y_v | within the training data | larger than the max of the offline dataset | \n| Task type | classification | regression |\n| Task objective | distill $\\tilde{x}$ to predict D | distill xh to incorporate high-scoring features of D |\n\nThe introduction of the large virtual predefined target score yh is inspired by Decision Transformer, in which a reward (score) is predefined for reinforcement learning, as we discuss in L64-L66.\n\nForward mapping alone suffers from the out-of-distribution issue as we discuss in L34-L37. We will add the above table and the following sentences to the paper to address the suggestion \"to provide an intuitive explanation of why we need a backward mapping besides the forward mapping\": \n\n Forward mapping alone suffers from the out-of-distribution issue. The backward mapping leverages the high-scoring design to predict the offline dataset and thus distills the information of the offline dataset into the high-scoring design, which can mitigate the out-of-distribution issue. ",
" ### General Reply\n\nMany thanks for your valuable and constructive comments on clarifying, correcting, and improving the materials in this paper! We will carefully revise the paper according to your comments as explained below.\n\n### Strengths And Weaknesses\n\n >This paper lacks a discussion on the relationship between the directional learning and a conservative regularization. \n >I believe that adding the backward mapping loss is similar to a conservative regularizer in the sense that they both aim to alleviate the distributional shift between the predicted optimal designs and the dataset designs. \n > I hope that the authors could provide some discussions on their similarity/difference and why learning the backward mapping could be a better alternative than conservative regularization methods.\n\n\n#### Compare with conservative regularization\n\nThank you for pointing this out! We agree that both backward mapping and the regularization term in COMs can mitigate the out-of-distribution issue. The key difference is that COMs achieves this from a model perspective while BDI achieves this from a data perspective.\n\nCompared with COMs, the backward mapping in BDI has two advantages:\n\n1. Effectiveness: Real-world offline model-based optimization often involves small-scale datasets since the labeling cost of protein/dna/robot is very high. BDI is built on the neural tangent kernel, which generally performs better than finite neural networks on small-scale datasets [1]. COMs is built on a finite neural network and it is non-trivial to modify COMs to a neural tangent kernel version.\n\n2. Generalization: The infinitely wide deep neural network (the neural tangent kernel) represents a broad class of DNNs [2], which enhances the generalization of the high-scoring designs. It is non-trivial to apply the infinitely wide DNN in COMs.\n\n [1] Arora et al. Harnessing the power of infinitely wide deep nets on small-data tasks. Int. Conf. Learning Representations, 2020.\n\n [2] Yuan et al. Neural tangent generalization attacks. Int. Conf. on Machine Learning, 2021.\n\nWe will add this part to the section of experiments.\n\n\n### Questions\n\n#### Constant y_h\n\n >Could the authors explain why using a constant y_h could work? Would it be possible to \n further improve the proposed method with a more sophisticated design of where the values are \n different for each high-scoring sample?\n\nWe set the constant $y_h$ to be larger than the maximal value of the offline dataset and thus the corresponding design can extract the high-scoring features like we analyze in L163-L168.\n\nWe investigate this further in Appendix A.2 (b), where we select 1) multiple relatively high-scoring ground-truth designs from the offline dataset; and 2) one learnable high-scoring design with the predefined target score yh=10 to form the set of high-scoring designs. In this case, different high-scoring samples have different yh. This setting also yields good results.\n\nThank you for the suggestion regarding different yh. We believe it would be possible to further improve the proposed method with a more sophisticated design in which the values are different for each high-scoring sample. We could strive to optimize over the values of yh (constraining them to be above a threshold). This is more complicated than the current approach, but it is a valuable suggestion for future work, and we will add a comment to this effect in the paper. \n\n### Overall\n\n**Thank you again for your instructive feedback on our paper. 
Please let us know if we have resolved your concern or if you have any further questions.**",
" ### Overall\n\n**Does the above reply address your concerns? Thank you again for your instructive review and feedback. We very much appreciate your careful review of the paper and look forward to further exchange with you during the rebuttal phase.**",
" ### Questions\n\n > (1) There appears to be a discrepancy between a small fraction of the results presented\n > in the main performance table, and those reported in other recent papers.\n > In particular, the performance of MIN and COMs are different from those reported in design-bench\n > (Trabucco, B. et al., 2022) on the following tasks: Ant Morphology, DKitty Morphology, and\n > Hopper Controller.\n > Could the authors investigate this and explain the cause of this difference?\n\n\n#### 1. Small discrepancy between ours and previous published results\nFor the baseline MIN, we run the code released in (Trabucco, B. et al., 2022) and we conjecture that the small difference arises due to different random seeds and/or different machines.\n\nThe baseline COMs belongs to the gradient updating category. For all methods in this category, the time step $T$ is set as $200$ in our work but as $50$ in (Trabucco, B. et al., 2022). We set $T$ to be large following ROMA [11] \"set the number of solution update to be large enough\". Other factors like neural network architectures can also lead to result differences. Compared with the COMs results reported in (Trabucco, B. et al., 2022), our BDI achieves better results on four tasks and the same result on the GFP task. The results for the D’Kitty Morphology task and the Hopper Controller task are slightly worse. Using the reported results in (Trabucco, B. et al., 2022), our BDI is still the best performing method and outperforms COMs in terms rank mean(2.1/11 < 3.9/11) and rank median (2/11 < 3/11).\n\nWe will report the original performances of MIN and COMs at (Trabucco, B. et al., 2022) in our final version.\n\n\nIt is worth emphasizing that there is no validation set in offline model-based optimization (we are searching for designs outside the training set and in practice do not have access to a performance oracle) so we cannot adopt standard approaches for determining hyperparameters [12] such as the number of gradient steps. \n\n [11] Yu et al. Roma: Robust model adaptation for offline model-based optimization. NeurIPS, 2021.\n [12] Trabucco et al. Design-bench: benchmarks for data-driven offline model-based optimization. arXiv preprint, 2022.\n\n\n > (2) The optimization objective in Equation (4) is chosen as a regression objective,\n > rather than purely maximizing the predictions of the learned objective function f_{\\theta} (X). \n > How are the results affected when this equation is changed to pure maximization, \n > rather than regression?\n\n\n#### 2. Change to pure maximization\n\nWe change the regression to pure maximization (Pure Maxi) the predictions of the learned objective function and report the experimental results on all tasks. We have added this part into Appendix A.6.\n\n| Mode/Task | SuperC | Ant | D'Kitty | Hopper | GFP | TFB | UTR | \n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n| Our BDI | $0.520$ | $0.962$ | $0.941$ | $1.989$ | $0.864$ | $0.973$ | $0.760$ | \n| Pure Maxi | $0.520$ | $0.911$ | $0.937$ | $1.830$ | $0.864$ | $0.956$ | $0.734$ | \n| Best Baseline | $0.503$ | $1.214$ | $0.942$ | $1.959$ | $0.865$ | $0.953$ | $0.707$ |\n| Best Result | **0.520** | **1.214** | **0.942** | **1.989** | **0.865** | **0.973** | **0.760** |\n\n\nAs we can see, the Pure Maxi results are generally worse than the regression form. 
We conjecture that the advantage of the regression form arises due to the consistency between the forward mapping and the backward mapping since both use the same predefined target score yh.\n\n > (3) The objective functions in Equation (8) appear to be assigned equal weight. \n > How are the results affected by ablating the weight of the distillation term (Equation 6) in the loss function?\n\n\n#### 3. Equal weight\n\nWe simply assign the forward mapping and the backward mapping equal weight since the two terms are symmetric and thus viewed as equally important. To answer your question, we view the weight term of the backward mapping as a hyperparameter and study the hyperparameter sensitivity on Ant and TFB like we did in Sec 4.6. We have added this part into Appendix A.7.\n\n| Weight | Ant | TFB | \n|:----:|:----:|:----:|\n| $0.00$ | $0.933$ | $0.973$ |\n| $0.50$ | $0.950$ | $0.973$ |\n| $0.80$ | $0.970$ | $0.973$ |\n| $0.90$ | $0.944$ | $0.973$ | \n| **1.00** | $0.962$ | $0.973$ | \n| $1.10$ | $0.943$ | $0.973$ |\n| $1.20$ | $0.956$ | $0.973$ | \n| $1.50$ | $0.941$ | $0.973$ |\n| $2.00$ | $0.938$ | $0.973$ | \n\nFor the ANT task, we find the results generally improve for any weight greater than zero. This demonstrates the effectiveness of the backward mapping.\n\nFor the TFB task, we find the behavior of BDI is identical for different weight terms. One possible reason is that BDI here does not consider the sequential nature of DNA and thus the backward mapping cannot provide significant performance gain as we have discussed before.",
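A toy contrast between the two objectives compared in the "Pure Maxi" ablation above may help make the distinction concrete. The linear `surrogate` and the design tensor `x` below are placeholders chosen for illustration; they are not the paper's NTK-based model or the exact form of Eq. (4).

```python
import torch

# Placeholders: `surrogate` stands in for the learned objective model and
# `x` for the design being optimized (assumed, not the paper's implementation).
surrogate = torch.nn.Linear(10, 1)
x = torch.randn(1, 10, requires_grad=True)
y_h = torch.tensor([[10.0]])                   # predefined high target score

pred = surrogate(x)
loss_regression = (pred - y_h).pow(2).mean()   # regression toward y_h ("Our BDI" row)
loss_pure_max = -pred.mean()                   # "Pure Maxi" alternative
loss_regression.backward()                     # gradient flows back to the design x
print(x.grad.shape)                            # torch.Size([1, 10])
```

The regression form anchors the update to the same predefined target $y_h$ that the backward mapping uses, while the pure-maximization form has no such reference point.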
" ### Clarity\n\n > Certain experimental details are not clear from the manuscript.\n > For example, on line 194-195 the authors write “we choose the top N = 128 most promising\n > designs for each method,” which could suggest more than N = 128 designs are generated,\n > and the number is downsampled to 128 using some indicator for how promising the designs are.\n > If such a post-selection mechanism is used, additional details should be included\n > in the paper for reproducibility. Such a mechanism should query the ground truth\n > objective function only N = 128 times to compute 100th percentile\n > and 50th percentile statistics as per design-bench (Trabucco, B. et al., 2022).\n\n\nWe apologize for the ambiguous wording. We follow exactly the same procedure as presented in Design-bench (Trabucco, B. et al., 2022) [10], and we only generate the top $128$ candidates designs. We will clarify this part in our final version to avoid possible misunderstanding.\n\n\n [10] Trabucco et al. Conservative objective models for effective offline model-based optimization. Int. Conf. Learning Rep, 2021.",
" ### Quality\n\n > For example, on line 278-279 of the manuscript, the authors write “we observe\n > that the backward mapping is more effective for continuous tasks, possibly because we do not\n > model the sequential nature of DNA and protein sequences.” This is a helpful intuition,\n > and could be improved if the authors tested this hypothesis with a model\n > that does “model the sequential nature of DNA and protein sequences.”\n > Quality may also be improved by addressing the questions listed below.\n\nThanks for your suggestions! We conduct additional experiments to verify our intuition in L278-L279.\n\nWe adopt DNABert [1] and ProtBert [2] to model the sequential nature of DNA/protein. We conduct an ablation study, removing the backward mapping to verify its effectiveness. We also remove the forward mapping to assess its importance. The following experiments demonstrate our intuition: backward mapping matters. We have added this part into Appendix A.5.\n\nFor DNA experiments, we consider the task with an exact oracle: TFBind8, and obtain the following 100th percentile results. As we can see in the following table, both the backward mapping and the forward mapping matter in BDI. We further consider a harder task setting TFBind8(reduce): reduce the size of the offline dataset from 32896 to 5000. The results verify the effectiveness and robustness of bidirectional mappings.\n\n| DNA Task | BDI | w/o $\\mathcal{L}_{l2h}$ | w/o $\\mathcal{L}_{h2l}$\n|:----:|:----:|:----:|:----:|\n|TFBind8 | **0.986** | $0.954$ | $0.952$ |\n|TFBind8(reduce) | **0.957** | $0.849$ | $0.900$ |\n\nFor protein experiments, we first consider the GFP task we used in our paper, and obtain the following results:\n\n| Protein Task | BDI | w/o $\\mathcal{L}_{l2h}$ | w/o $\\mathcal{L}_{h2l}$\n|:----:|:----:|:----:|:----:|\n|GFP | **0.864** | $0.864$ | $0.864$ |\n\nThe results are indistinguishable. We note that [3] observes for the GFP task \"indistinguishable results across all methods\". To further investigate the behavior, we introduce $5$ new protein tasks used in recent works: LACT, AMP, ALIP, LEV, and SUMO. We now present the results and task details. \n\n| Protein Task | BDI | w/o $\\mathcal{L}_{l2h}$ | w/o $\\mathcal{L}_{h2l}$\n|:----:|:----:|:----:|:----:|\n| LACT | **0.820** | $-0.134$ | $0.550$ | \n| AMP | **0.772** | $0.765$ | $0.753$ | \n| ALIP | **0.813** | $0.659$ | $0.699$ |\n| LEV | **0.859** | $0.577$ | $0.556$ | \n| SUMO | **1.489** | $0.773$ | $0.596$ |\n\nThese results demonstrate the effectiveness of our bidirectional mappings.\n\nLACT: We measure the thermodynamic stability of the TEM-1 β-Lactamase protein. The oracle is trained on $17857$ samples from [4][5]. The bottom half of these samples are used for offline algorithms.\n\nAMP: Antimicrobial peptides are short protein sequences that act against pathogens. We use the $6760$ AMPs from [6] to train the oracle and the bottom half of the AMPs for the offline algorithms.\n\nALIP: We measure the enzyme activity of the Aliphatic Amide Hydrolase sequence following [7]. We use the $6629$ samples from [7] to train the oracle and the bottom half of them for the offline algorithms. \n\nLEV: In an ATP-dependent reaction, LG can be converted by Levoglucosan kinase to the glucose-6-phosphate. Following [8], we measure the enzyme activity of this reaction. 
All $7891$ protein sequences are used to train the oracle and the bottom half are for the offline algorithms.\n\nSUMO: Following [9], we measure the growth rescue rate of human SUMO E2 conjugase protein. Around 2000 samples are used to train the oracle and the bottom half are used for the offline algorithms.\n\n [1] Ji et al. DNABERT: pre-trained bidirectional encoder representations from transformers model for DNA-language in genome. Bioinformatics, 2021.\n [2] Ahmed et al. ProtTrans: towards cracking the language of lifes code through self-supervised deep learning and high performance computing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.\n [3] Trabucco et al. Design-bench: benchmarks for data-driven offline model-based optimization. arXiv preprint, 2022.\n [4] CE et al. Pervasive pairwise intragenic epistasis among sequential mutations in tem1 β-lactamase. Journal of Molecular Biology, 2019.\n [5] Firnberg et al. A comprehensive, high-resolution map of a gene’s fitness landscape. Molecular Biology and Evolution, 2014.\n [6] Zhang et al. Unifying likelihood-free inference with black-box optimization and beyond. Int. Conf. Learning Representations, 2022.\n [7] Wrenbeck et al. Single mutation fitness landscapes for an enzyme on multiple substrates reveal specificity is globally encoded. Nature Communications, 2017.\n [8] Klesmith et al. Comprehensive sequence-flux mapping of a levoglucosan utilization pathway in e.coli. ACS Synthetic Biology, 2015.\n [9] Weile et al. A framework for exhaustively mapping functional missense variants. Molecular Systems Biology, 2017.",
" ### General Reply\n\nMany thanks for your valuable and constructive comments on clarifying, correcting, and improving the materials in this paper! We will carefully revise the paper according to your comments as explained below.\n\n### Originality\n\nThank you for drawing our attention to the interesting BCQ work which used regularization on the optimized samples in the offline RL setting. We will add the following sentences in the related work section to acknowledge the connection:\n\n > Similar to our backward mapping constraining designs, BCQ (Fujimoto, S. et al., 2019) \n > constrains the state-action pairs to be contained in the support of the static dataset. \n > Our proposed backward mapping is different in its implementation and also enables the model to generate designs outside the training distribution.",
" This paper investigates the subject of Offline Model-Based Optimization, which involves processing a static dataset of designs X and function evaluations Y in order to solve the optimization problem X* = \\argmax_{X} f(X) for an unknown function f. This paper proposes a gradient-based method for solving Offline MBO problems using infinite-width DNN models. The approach facilitates efficiently optimizing designs X via a proposed bidirectional objective that encourages designs to be found that not only achieve high performance under an approximation of the objective function f_{\\theta} (X), which is referred to as the forward mapping in the paper, but jointly maximize a distillation objective that encourages optimized design-score pairs to be informative about the characteristics of the true objective function (the backward mapping).\n\nOne key novelty in the paper is the proposed use of a distillation objective to constrain the optimized design-score pairs, rather than regularizing the DNN model used for optimization. Results in the paper appear to show the effectiveness of this principle on a standard benchmarking framework for Offline MBO using a comprehensive set of baselines. Originality:\n\nThe subject of mitigating out-of-distribution samples (ie, design-score pairs) using gradient-based Offline Model-Based Optimization techniques has been the subject of multiple papers. The subject itself is not novel, but the solution proposed by the authors (regularizing the optimized samples rather than the model) is an original proposition in the Offline MBO domain. Though different in execution, a similarly inspired technique has been proposed in the Offline RL setting, namely BCQ (Fujimoto, S. et al., 2019), which constrains the (s’, a’) pairs sampled during bellman backups \\max_{a’} Q(s’, a’) to be contained in the support of the static dataset.\n\nBoth papers explore constraints (either implicitly or explicitly) on the data being optimized (actions in the case of BCQ, and designs in the case of BDI), rather than model regularization. However, it should be noted that the implementation of the constraint is fundamentally different between these two approaches, and BDI’s implementation is original. Dataset distillation enforces a constraint on the data that is an appealing alternative to the support-based constraint seen in BCQ because it lets the model generate designs outside the training distribution.\n\n\nQuality:\n\n\nThe experimental setup in the paper adheres to the evaluation protocol established in prior benchmarking papers, which aids in the reproducibility of the paper. Relevant baselines from the literature, including several new methods, and multiple classical methods, adapted to the Offline MBO setting, are reported. These factors improve the quality of the paper’s main evaluation. Certain relevant ablations are not included (see my questions below), and a small number of claims and speculations made in the paper are not fully supported by evidence. 
For example, on line 278-279 of the manuscript, the authors write “we observe that the backward mapping is more effective for continuous tasks, possibly because we do not model the sequential nature of DNA and protein sequences.” This is a helpful intuition, and could be improved if the authors tested this hypothesis with a model that does “model the sequential nature of DNA and protein sequences.” Quality may also be improved by addressing the questions listed below.\n\nClarity:\n\n\nCertain experimental details are not clear from the manuscript. For example, on line 194-195 the authors write “we choose the top N = 128 most promising designs for each method,” which could suggest more than N = 128 designs are generated, and the number is downsampled to 128 using some indicator for how promising the designs are. If such a post-selection mechanism is used, additional details should be included in the paper for reproducibility. Such a mechanism should query the ground truth objective function only N = 128 times to compute 100th percentile and 50th percentile statistics as per design-bench (Trabucco, B. et al., 2022).\n\nSignificance:\n\n\nMitigating the out-of-distribution problem in Offline MBO has significance as a research area, due to its compelling applications in real-world design problems. The results in this paper show the proposed approach is quite effective in practice at finding high-performing designs. (1) There appears to be a discrepancy between a small fraction of the results presented in the main performance table, and those reported in other recent papers. In particular, the performance of MIN and COMs are different from those reported in design-bench (Trabucco, B. et al., 2022) on the following tasks: Ant Morphology, DKitty Morphology, and Hopper Controller. Could the authors investigate this and explain the cause of this difference?\n\n(2) The optimization objective in Equation (4) is chosen as a regression objective, rather than purely maximizing the predictions of the learned objective function f_{\\theta} (X). How are the results affected when this equation is changed to pure maximization, rather than regression?\n\n(3) The objective functions in Equation (8) appear to be assigned equal weight. How are the results affected by ablating the weight of the distillation term (Equation 6) in the loss function? The authors have sufficiently addressed these.",
" Offline model-based optimization could suffer from the distributional shift issue where the model bias could be exploited by the optimization process to output poor designs. Most previous offline model-based optimization aims to solve the problem by introducing loss function terms that promote distribution matching between the learned model outputs and the ground-truth. Unlike previous works, this paper introduces a directional learning approach based on neural tangent kernel to facilitate the distribution matching. By encouraging learning accurate mappings from low-scoring designs to high-scoring designs and vice versa, the proposed method could alleviate the issue of outputting unseen designs with significantly overestimated scores. The proposed method achieved the best performance on six out of seven widely studied offline model-based optimization task. ### Strengths\n1. The proposed approach is well motivated and the writing is easy to follow.\n2. The authors provide a good insight on why the bidirectional learning helps mitigate the distributional shift issue in 3.3.\n3. Experiment results show strong performance of the proposed method.\n\n### Weakness\nThis paper lacks a discussion on the relationship between the directional learning and a conservative regularization. I believe that adding the backward mapping loss is similar to a conservative regularizer in the sense that they both aim to alleviate the distributional shift between the predicted optimal designs and the dataset designs. I hope that the authors could provide some discussions on their similarity/difference and why learning the backward mapping could be a better alternative than conservative regularization methods.\n Could the authors explain why using a constant $y_h$ could work? Would it be possible to further improve the proposed method with a more sophisticated design of $y_h$ where the values are different for each high-scoring sample? The authors have addressed the limitations.",
" This paper proposes bidirectional learning for offline infinite-width model-based optimization (BDI) to address the out-of-distribution problem. BDI consists of a forward mapping and a backward mapping. The authors claim that the backward mapping can distill more information into high-scoring designs. Experiments show that BDI achieves state-of-the-art performance in both continuous and discrete tasks. Originality:\n\nThe idea of BDI is novel. This paper introduces the idea of neural tangent kernels into offline model-based optimization. Moreover, the authors propose a novel backward mapping, significantly improving the performance of model-based optimization.\n\nQuality:\n\nThe comparisons are thorough. Results show that BDI outperforms baselines in most tasks, and all component in BDI is useful. The use of neural tangent kernels in BDI is based on solid theoretical motivation. Nevertheless, I am confused about the backward mapping.\n1. What is the motivation for using the backward mapping? The authors may want to provide an intuitive explanation of why we need a backward mapping besides the forward mapping.\n2. How does the backward mapping help to distill information into the high-scoring designs? Though the authors provide ablation studies to show that the backward mapping does work, the authors may want to provide more analyses/visualization to show why the backward mapping works.\n3. I can not understand the predefined score $y_h$ used in the backward mapping. In forward mapping, a high $y_h$ can encourage the algorithm to generate $X_h$ with high scores. In backward mapping, what is the purpose of training $f_\\theta^h$ with a virtual score $y_h$?\n\n\n\nClarity:\n\nMost of this paper is easy to follow. Nevertheless, I am confused about the backward mapping. The authors may want to discuss the backward mapping more, especially its motivation and how it works.\n\nSignificance:\n\nThe backward mapping is novel and can provide a significant performance improvement. Please refer to the \"Strengths And Weaknesses\" part. The authors have adequately addressed the limitations and potential negative societal impact of their work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"3eoHuS5QRM-",
"6eG9P-Z-oH",
"j8dIBmdFIiO",
"JO4prL8inPn",
"Q-O_ye6F23X",
"ogzn2WT31Ml",
"muJrF6zHqN",
"0PilWkDcAX",
"r4WTCDYQRif",
"bWVQzEOkk_e",
"_M9owUpMXf",
"ZebZ8nM1VI",
"xaGD_SS9OmF",
"qqfMFrJZBkp",
"bFd5Cx2gSS4",
"2KuycUBCl-m",
"nips_2022__j8yVIyp27Q",
"nips_2022__j8yVIyp27Q",
"nips_2022__j8yVIyp27Q"
] |
nips_2022_iuW96ssPQX | A Transformer-Based Object Detector with Coarse-Fine Crossing Representations | Transformer-based object detectors have shown competitive performance recently. Compared with convolutional neural networks limited by the relatively small receptive fields, the advantage of transformer for visual tasks is the capacity to perceive long-range dependencies among all image patches, while the deficiency is that the local fine-grained information is not fully excavated. In this paper, we introduce the Coarse-grained and Fine-grained crossing representations to build an efficient Detection Transformer (CFDT). Specifically, we propose a local-global cross fusion module to establish the connection between local fine-grained features and global coarse-grained features. Besides, we propose a coarse-fine aware neck which enables detection tokens to interact with both coarse-grained and fine-grained features. Furthermore, an efficient feature integration module is presented for fusing multi-scale representations from different stages. Experimental results on the COCO dataset demonstrate the effectiveness of the proposed method. For instance, our CFDT achieves 48.1 AP with 173G FLOPs, which possesses higher accuracy and less computation compared with the state-of-the-art transformer-based detector ViDT. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/CFDT. | Accept | This work proposes a new object detector architecture that is based on a CNN stem, combined with a mostly transformer-based architecture, with the addition of a cross-fusion module that allows for reconciling coarse and fine-grained features for more precise object detection.
The paper is well-written, novel, and presents a significant gain over the state of the art, using a reasonable amount of compute.
Object detection is a central area of interest and this method shows how to leverage the power of transformers to push the envelope in this domain. Therefore, I propose this paper be accepted at NeurIPS 2022. | val | [
"ww3o8sL5X02",
"6XktyXmW3hV",
"He2xMGP_qP",
"PvC4WhKLAs_",
"XkyZuSLzHwS",
"mOmpCDCIphm",
"JKWuGzF8bO",
"YWu3XhsudX",
"dBvZnX3qZTJ",
"-myShTq_alh",
"fkE1G6HwBb",
"f0bEHpM1fB",
"8WHAD28ddE2",
"Nroi5oia6bB",
"4UeUdflDamq",
"hLxWk4NBmnQ",
"8h4wlFcerMG"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your recognition of our manuscript.",
" Thanks for your reply, which solves my question. I keep my original score.\n\n",
" Thank you for the support. We will add the new results and analysis in the paper.",
" I appreciate the authors' response. Now, it's clear why the proposed architecture is faster than Swin-based models. The reason mainly comes from the small-scale feature maps in Pyramid-TNT. Although it reduces the AP on small-size objects, the paper's proposed components help improve it without compromising inference speed. The provided results are good enough to show the contribution of each component. Therefore, I raise my score and recommend putting the new results and analysis in the paper.\n",
" **We would like to express our sincere thanks for the constructive comments and suggestions from the reviewer.**\n\n\n\n**W1: Performance drop for small-size objects. According to the experiments, $AP_s$ drops from 30.4 to 28.1 compared to ViDT, although more fine-grained information was integrated. I can't understand why this happens. Intuitively, the proposed ideas of fusing more fine-grained information should improve $AP_s$.**\n\n**Q1: Could you explain why $AP_s$ drops compared with ViDT (Swin-small)?**\n\n**A-W1&Q1:** Thanks for the helpful comments. Since these two questions are about $AP_s$, it is appropriate to answer them together.\n\n**The difference of backbones (Swin vs. PyramidTNT)**. Compared with ViDT with the backbone of Swin-Small, the proposed FCDT with the backbone of PyramidTNT-Medium achieves higher $AP$, but lower $AP_ S$. The main factor is the relatively weak feature representations output by the first stage blocks. Feature maps with larger size are more suitable for detecting small objects. For a 2D image with a scale of $H \\times W $, with ResNet or Swin-Transformer as the backbone, the scale of feature maps in each stage is $\\frac{H}{4} \\times \\frac{W}{4} $, $\\frac{H}{8} \\times \\frac{W}{8} $, $\\frac{H}{16} \\times \\frac{W}{16} $, $\\frac{H}{32} \\times \\frac{W}{32} $. Feature maps of the last 3 stages are usually fed into Neck, and an additional feature map is generated by downsampling after the last stage. Therefore, taking ResNet or Swin-Transformer as the backbone, the scale of four output feature maps are $\\frac{H}{8} \\times \\frac{W}{8} $, $\\frac{H}{16} \\times \\frac{W}{16} $, $\\frac{H}{32} \\times \\frac{W}{32} $,$\\frac{H}{64} \\times \\frac{W}{64} $. These features are used in the Neck part to make cross-attention with det tokens. In PyramidTNT, the scale of feature maps in each stage is $\\frac{H}{8} \\times \\frac{W}{8} $, $\\frac{H}{16} \\times \\frac{W}{16} $, $\\frac{H}{32} \\times \\frac{W}{32} $,$\\frac{H}{64} \\times \\frac{W}{64} $. In object detection, the feature map scale has a great impact on detection performance, so we keep the outputs shape of our method consistent with Swin-Transformer or ResNet. Therefore, for the feature map with the same scale of $\\frac{H}{8} \\times \\frac{W}{8} $, the feature map obtained by Swin-Transformer is generated by 2 stages of Swin-Transformer blocks, while ours is only 1 stage of Transformer block. So the feature representations of scale with $\\frac{H}{8} \\times \\frac{W}{8} $ in our method are relatively weak. The detection performance of small objects for large backbone PyramidTNT-Medium is not as good as ViDT. In terms of PyramidTNT-Tiny and PyramidTNT-Small with relatively small complexity, we acquire higher $AP_ S$ than ViDT with the backbone of Swin-Transformer.\n\n**The performance gains of the proposed modules**. In fact, the proposed three components increased the $AP$, as well $AP_S$. As shown in Tab.5 of the original paper, we reveal the analysis of complete components. The below table shows the improvement for $AP_S$ under the backbone of PyramidTNT-Medium. It can be seen that compared with directly deploying PyramidTNT as the backbone, the introduction of Local-Global Cross Fusion (LGCF), Fine-Coarse Aware Neck (FCAN) and Efficient Multi-scale Feature Integration (EMFI) not only increases the $AP$, but also greatly improves the $AP_ S$ from 23.1 to 28.1. 
Therefore, our proposed methods are beneficial for the improvement of small object detection.\n\n| Backbone | LGCF | FCAN | EMFI | AP | APs |\n|-|-|-|-|-|-|\n| PyramidTNT_Medium| | | |44.3 | 23.1 |\n| PyramidTNT_Medium| ✓ | | |46.5 | 26.7 |\n| PyramidTNT_Medium| ✓ | ✓ | |47.2 | 27.0 |\n| PyramidTNT_Medium| ✓ | ✓ | ✓|48.1 | 28.1 |\n\n\n**Swin-Transformer takes into account the multi-scale effect.** With shifted windows, patches obtain features of different scales. This design has already achieved good detection performance. Besides, we introduce LGCF and FCAN to ViDT with the backbone of Swin-Nano, and the results are as follows:\n| Backbone | LGCF | FCAN | AP | APs |\n|-|-|-|-|-|\n| Swin_Nano | | | 40.4 | 23.2 |\n| Swin_Nano | ✓ | | 42.3 | 25.7 |\n| Swin_Nano | ✓ | ✓ | 42.7 | 25.9 |\n\nIt is apparent that the introduction of LGCF and FCAN not only increases the AP but also improves the detection performance for small objects. Some specific details about how to migrate them to Swin-Transformer are shown in **A-Q3**.",
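As a small aid to the stride bookkeeping above, the sizes of the four feature maps fed to the neck can be computed directly; the input resolution and helper name below are illustrative assumptions only.

```python
def neck_input_scales(h, w, strides=(8, 16, 32, 64)):
    # Spatial sizes of the feature maps passed to the neck, i.e. H/8 x W/8,
    # H/16 x W/16, H/32 x W/32 and H/64 x W/64 as described above.
    return [(h // s, w // s) for s in strides]

print(neck_input_scales(800, 1344))
# [(100, 168), (50, 84), (25, 42), (12, 21)]
```

With PyramidTNT these are the per-stage outputs directly, whereas with ResNet or Swin the H/8 map is produced by two stages of blocks, which is the asymmetry pointed out above.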
" **W2: Incomplete Analysis on Backbones. In Section 4.3, there is a comparison between PyramidTNT and Swin-Transformer. I think this analysis is not complete to show that the improvement did not come from the replacement. First, ViDT is not designed to be with PyramidTNT. Second, only accuracy and AP measures are reported without 'FLOPS or FPS'. To sum up these, I think a more suitable comparison is combining Swin-Transformer with your three components. The performance trends with ViDT could not tell us their impact on your architecture.**\n\n**A-W2:** Thanks for the helpful comment. \n\nThe original intention of the comparison in Section 4.3 is to show that our FCDT achieves great performance, not because the backbone is just replaced by PyramidTNT, but to highlight the effectiveness of the three proposed components. If there is no such comparison, it might cause a misunderstanding that the main reason why we can achieve good results is utilizing PyramidTNT as backbone. In fact, directly replacing the backbone of ViDT with PyramidTNT cannot reach the detection performance that Swin-Transformer achieves. On the other hand, it can be seen from Tab.2 in Section 4.3 that the effect of image classification is not necessarily positively correlated with the performance of object detection. With such a comparison, we highlight that the better detection performance of FCDT mostly comes from the three modules that we propose.\n\nHowever, we realize that this is indeed an incomplete analysis of backbones because ViDT is not designed to be with PyramidTNT. Therefore, in order to prove the scalability of the proposed algorithm, we migrate the LGCF and FCAN to ViDT with the backbone of Swin-Transformer. Since the purpose of EMFI is to make up for the gap between feature extraction ability of PyramidTNT and Swin-Transformer in $\\frac{H}{8} \\times \\frac{W}{8} $ feature maps, it is not necessary to deploy EMFI to ViDT with the backbone of Swin-Transformer. We insert the local fine-grained information into Swin-Nano and introduce the LGCF and FCAN. Experimental results are as follows. From the table, it is apparent that our LGCF and FCAN greatly improve 2.4 AP. The trend of AP changes is consistent with FCDT. Some specific details are shown in **A-Q3**.\n| Backbone | LGCF | FCAN | AP | FLOPs(G) |\n|-|-|-|-|-|\n| Swin_Nano | | | 40.4 | 37 |\n| Swin_Nano | ✓ | | 42.3 | 43 |\n| Swin_Nano | ✓ | ✓ | 42.7 | 45 |\n\n\n\n**W3: Weak Novelty. The proposed approach mostly relies on existing works. For local- fine-grained information, PyramidTNT is adopted as the backbone, and the proposed three components are very simple and straightforward; this could be an advantage in practice, but academically looks like an incremental paper.**\n\n**A-W3:** Thanks for the helpful comment. \n\nThe advantage of transformer-based models is the capacity to perceive long-range dependencies among all image patches. Therefore, this is also the main reason why transformer-based detectors, such as DETR, YOLOS, ViDT, etc., can achieve great detection performance. However, these models ignore the spatial local information within each patch. There are useful fine-grained features inside each divided patch, which are rarely considered. Although the fine-grained inner representations have been proposed in TNT or PyramidTNT, the restricted receptive field of inner patches and the unidirectional \"Inner to Outer\" method limit the performance of inner representations. 
Besides, the previous \"Inner to Outer\" method is simply flattening the inner patches and adding them to the outer patches, which cannot fully exploit spatial information inside fine-grained features. Therefore, we propose LGCF for mutual cross fusion between global coarse-grained features and local fine-grained features. Besides, we propose FCAN to make full use of fine-grained features, which lets det tokens make cross-attention with both fine- and coarse-grained representations. Furthermore, we introduce EMFI to use the first stage outputs efficiently. \n\nThe main contribution of this paper is to introduce the Fine-grained and Coarse-grained crossing representations. We hope to provide a new idea to optimize the detection transformer, which is to capture both local fine-grained features and global coarse-grained features. In order to show that this is feasible, we introduce FCAN and LGCF to ViDT with the backbone of Swin-Transformer. In future research, we hope that the proposed coarse-fine grained crossing strategies can inspire more Transformer-based models. ",
" **Q2: I feel hard to know what components make the architecture faster than others. I understand your design is simple and effective but there is no explanation why it is faster than another efficient mechanism (e.g., local attention in Swin - ViDT).**\n\n**A-Q2:** Thanks for the helpful comment. \n\nFirst of all, compared with ResNet and Swin-Transformer, PyramidTNT is a relatively lightweight model. In the upper **A-W1&Q1**, we illustrate that the scale of feature maps output in the four stages is smaller than that of Swin-Transformer. It is also such a characteristic that the number of patches in each stage of PyramidTNT is 1/4 of that of Swin-Transformer, so its calculation amount is greatly reduced compared with Swin-Transformer. In addition, the number and feature dimension of transformer blocks are also smaller than that of Swin-Transformer. The FLOPs of ViDT with the backbone of Swin-Nano is 37G. After we replace it with PyramidTNT-Tiny, the FLOPs are reduced to 28G. The FLOPs of ViDT with the backbone of Swin-Tiny is 114G. After we replace it with PyramidTNT-Small, the FLOPs are reduced to 65G. \n\nThen, our designed components are lightweight. LGCF accounts for the most computation in the proposed modules. For the backbone, since we only introduce LGCF at the end of each stage, there are only four LGCF modules added to the backbone. The detailed changes of FLOPs are shown in the table below. With the backbone of PyramidTNT-Tiny, the FLOPs increase of the proposed three components are 3.7G, 1.5G, and 0.2G, respectively.\n\n| Backbone | LGCF | FCAN|EMFI | AP | FLOPs(G) |\n|-|-|-|-|-|-|\n| PyramidTNT_Tiny| | | |40.8 | 27.8 |\n| PyramidTNT_Tiny| ✓ | | |42.2 | 31.5 |\n| PyramidTNT_Tiny| ✓ | ✓ | |42.6 | 33.0 |\n| PyramidTNT_Tiny| ✓ | ✓ | ✓|43.0 | 33.2 |\n\nWith the above two factors, our proposed FCDT is efficient compared with ViDT. Because our proposed modules have brought obvious $AP$ improvement with less calculation, this is an advantage of practical application.\n\n**Q3: Your architecture seems to be coupled with PyramidTNT. This could be a limitation (low extendability). Do you think your three components can be combined with other backbones as well? If possible, doing and comparing them would be good to prove that the performance improvement is not from the backbone.**\n\n**A-Q3:** Thanks for the helpful comment. \n\nThe proposed components can be combined with other backbones as well. As the answer **A-W2**, we insert the inner patches into Swin-Transformer, and introduce the LGCF to perform cross fusion between coarse-grained and fine-grained features. We also utilize FCAN to let det tokens interact with both types of representations. Since the purpose of EMFI is to make up for the gap between feature extraction ability of PyramidTNT and Swin-Transformer in $\\frac{H}{8} \\times \\frac{W}{8} $ feature maps, it is not necessary to deploy EMFI to ViDT with the backbone of Swin-Transformer. \n\nIn Swin-Transformer, the image patches are generated by the \"PatchEmbed\" operation. We regard the obtained image patches as outer patches and generate inner patches from the input image. We keep outer patches as before to extract features through Swin-Transformer blocks. For inner patches, we extract local fine-grained features through a new independent basic Transformer block(including Multi-Head Attention and MLP, etc) in each stage. At the end of each stage, we perform the mutual cross fusion between global coarse-grained features and local fine-grained features through LGCF. 
\n\nCompared with the original model, we add a basic transformer block and an LGCF module in each stage. Besides, we also use FCAN to let det tokens make cross-attention with both types of representations. We utilize Swin-Nano as the base backbone, and the experimental results are as follows:\n\n| Backbone | LGCF | FCAN | AP | FLOPs(G) |\n|-|-|-|-|-|\n| Swin_Nano | | | 40.4 | 37 |\n| Swin_Nano | ✓ | | 42.3 | 43 |\n| Swin_Nano | ✓ | ✓ | 42.7 | 45 |\n\n\nCompared with the baseline of 40.4 $AP$, the introduction of LGCF increases it to 42.3 $AP$. With FCAN, the $AP$ further increases to 42.7. This performance proves that our proposed components can be extended to another backbone. We show that our method is feasible for general Transformer-based detectors to capture fine-coarse crossing representations.\n\n\n\n**Again, thank you very much for the kind efforts in evaluating and helping to improve the quality of our manuscript!**",
" **We would like to express our sincere thanks for the constructive comments and suggestions from the reviewer.**\n\n\n\n**W1: The whole pipeline seems to be fine, but lacks deep insight. Most of the operations are feature aggregation between different levels, which has been deeply explored in object detection.**\n\n**A-W1:** Thanks for the helpful comment. \n\nCompared with CNN, the reason why Transformer can achieve great performance in visual tasks is the typical strategy to perform global long-range attention on the divided image patches. Therefore, this is also the main reason why transformer-based detectors, such as DETR, YOLOS, ViDT, etc., achieve great detection performance. \n\nHowever, these models ignore the spatial local information inside each patch. There are useful fine-grained features inside each divided patch, which are rarely considered. The main contribution of this paper is to introduce the Fine-grained and Coarse-grained crossing representations, and propose a cross-fusion method to capture both local and global information in the meanwhile. Through the proposed Local-Global Cross Fusion (LGCF), our model achieves a significant improvement in accuracy. Besides, we put forward Fine-Aware Neck (FCAN) to make full use of fine-grained features, which let det tokens make cross-attention with both fine and coarse grained representations. Furthermore, we put forward Efficient Multi-scale Feature Integration (EMFI) to use the first stage feature map efficiently. The other transformer-based detectors, such as DETR, ViDT, etc., pursue better global information but do not consider the local fine-grained features. Through fine-coarse crossing representations, our proposed FCDT possesses high accuracy and less computation. Therefore, it is promising to introduce local fine-grained representations and explore better crossing methods between coarse and fine grained features to enhance the detection performance of Transformer-based detectors. \n\n**W2: Some of the statements in this paper are confusing. For example, how to understand “combine the MSDA(...) to Qdet and interact with Fo” in Line192.**\n\n**A-W2:** Thanks for the helpful comment. The original word “combine” is inappropriate. This statement in Line 192 should change to “After the deformable cross attention of det tokens and inner patches, we add $MSDA(Q\\_{det},\\mathcal{F}\\_{I})$ to $Q\\_{det}$, and then make further interact with $\\mathcal{F}\\_{O}$”. We have reexamined the whole paper again and found some unclear descriptions. We will rewrite those sentences to make them not confusing. Some of them are modified as follows:\n\nIn Line27, the original description is \"One is to **replace the backbone with transformer variants in CNN-based object detectors**\", the changed version is \"One is to **replace the CNN-based backbones with transformer variants in object detectors**\".\n\nIn Line68, the original description is \"However, **these models ignore the spatial local information within each patch. There are more useful fine-grained features inside each divided patch, which are rarely considered**\", the changed version is \"However, **the rich spatial information inside these divided patches is rarely considered by previous models**\".\n\nIn Line97, the original description is \"Han et al. propose Transformer iN Transformer (TNT) that **not only constructs** the global connection among outer patches, but also **the inner attention mechanism of each patch**\", the changed version is \"Han et al. 
propose Transformer iN Transformer (TNT) that **constructs not only** the global connection among outer patches, but also **the inner communication inside each patch**\".\n\nIn Line163, the original description is \"Although the outer patches acquire fine-grained inner representations with original PyramidTNT **block**, the fusion method of simple flattening and addition ignores spatial information\", the changed version is \"Although the outer patches acquire fine-grained inner representations with original PyramidTNT **blocks**, the fusion method of simple flatten and add ignores spatial information\".",
" \n**W3: Writing errors. For example, “the lightweight bottom-up feature integration algorithm Efficient Multi-scale Feature Integration” should be “the lightweight bottom-up feature integration algorithm, i.e., Efficient Multi-scale Feature Integration” in Line138. The authors are advised to double-check for similar errors.**\n\n**A-W3:** Thanks for the helpful comment. It is indeed a writing error of original sentence “we introduce the lightweight bottom-up feature integration algorithm Efficient Multi-scale Feature Integration module.” We have changed it to “ we introduce the lightweight bottom-up feature integration algorithm, i.e., Efficient Multi-scale Feature Integration”. We have also checked the whole paper again, mainly to correct the typos and grammar errors.\n\nSome of these errors are modified as follows:\n\nIn Line18, the original description is \"**In the past decade, models based on convolutional neural networks (CNNs) used to be the mainstream architecture for object detection tasks.**\", the changed version is \"**The former mainstream architectures for object detection are mostly based on convolutional neural networks (CNNs).**\"\n\nIn Line25, the original description is \"can be divided into three **parts**: backbone, neck and head\", the changed version is \"can be divided into three **components**: backbone, neck, and head.\"\n\nIn Line26, the original description is \"With the development of transformer **used for** vision tasks\", the changed version is \"With the development of transformer **in** vision tasks\".\n\nIn Line38, the original description is \"In DETR, ResNet is used as backbone for extracting features and transformer is proposed to integrate the relations between learnable object queries and extracted image features\", the changed version is \"In DETR, ResNet is used as **the backbone** for extracting features and transformer is proposed to integrate the relations between learnable object queries and **intermediate** image features\".\n\nIn Line46, the original description is \"The other is the slow **convergence**\", the changed version is \"The other is the slow **convergence speed**\".\n\nIn Line61, the original description is \"it incorporates an encoder-free neck structure to **boost** the detection performance without **much increase in computational load**\", the changed version is \"it incorporates an encoder-free neck structure to **further** **boost** the detection performance without **introducing too much computational burden**\".\n\nIn Line70, the original description is \"**Considering the general object detection task**\", the changed version is \"**Take the general object detection benchmark as an example**\".\n\nIn Line91, the original description is \"In this section, we briefly revisit the fine-grained representations in transformer and **detection transformer frameworks**\", the changed version is \"In this section, we briefly revisit the fine-grained representations in transformer and **transformer-based detection frameworks**\".\n\n\n**Q1: Why are features of outer and features of inner different sizes in Line148?**\n\n**A-Q1:** For a 2D image with the shape of $H\\times W \\times C$, we divide it into $\\frac{H}{4} \\times \\frac{W}{4} $ outer patches. The shape of each outer patch is $4\\times 4 \\times C$, so the length of each outer patch in the process is $16C$. At this time, the shape of outer patches can be written as [$\\frac{H}{4} \\times \\frac{W}{4} $,$16C$]. 
In addition, each outer patch is also composed of $4\\times 4 $ pixels. We regard its internal pixels as inner patches, and the length of each inner patch is $C$. Each inner patch only performs attention with other 15 inner patches in a fixed region. At this time, the shape of inner patches can be written as [$H\\times W$, $C$]. The inner patches and outer patches can be considered as two feature maps with different scales and channel numbers. Therefore, for the $l$ stage feature maps, we use $\\mathcal{F}\\_{O}^{l} \\in \\mathbb{R} ^{\\frac{H}{2^{l+2}}\\times \\frac{W}{2^{l+2}}\\times C_{l}}$ and $\\mathcal{F}\\_{I}^{l} \\in \\mathbb{R} ^{\\frac{H}{2^{l}}\\times \\frac{W}{2^{l}}\\times \\frac{C_{l}}{16}}$ to represent outer coarse-grained patches and inner fine-grained patches, respectively.",
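To make the shape bookkeeping above concrete, here is a minimal PyTorch reshape sketch. The single-image layout, the tensor names, and the use of `unfold` are assumptions for illustration, not the released implementation.

```python
import torch

C, H, W = 3, 32, 32
img = torch.randn(C, H, W)            # toy image, H and W divisible by 4

# Outer patches: (H/4 * W/4) tokens, each of length 16*C.
outer = (
    img.unfold(1, 4, 4)               # (C, H/4, W, 4)
       .unfold(2, 4, 4)               # (C, H/4, W/4, 4, 4)
       .permute(1, 2, 0, 3, 4)        # (H/4, W/4, C, 4, 4)
       .reshape((H // 4) * (W // 4), 16 * C)
)

# Inner patches: every pixel is a token of length C, grouped so that each
# group of 16 tokens belongs to one outer patch (local attention stays inside it).
inner = (
    img.unfold(1, 4, 4)
       .unfold(2, 4, 4)
       .permute(1, 2, 3, 4, 0)        # (H/4, W/4, 4, 4, C)
       .reshape((H // 4) * (W // 4), 16, C)
)

print(outer.shape)                    # torch.Size([64, 48])
print(inner.shape)                    # torch.Size([64, 16, 3])
```

This matches the shapes quoted above: outer tokens of length $16C$ on a $\frac{H}{4}\times\frac{W}{4}$ grid, and $H\times W$ inner tokens of length $C$ confined to their own $4\times 4$ region.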
" \n\n**Q2: What happens when det tokens interacts with coarse-grained outer patches first and then with fine-grained inner patches in section3.2?**\n\n**Q3: Based on Eq. 8, my understanding is that Qdet and MSDA(Qdet, Fi) have a shortcut, but why Qdet and MSDA(Qdet, Fo) do not have shortcut?**\n\n**A-Q2&A-Q3:** Thanks for the helpful comments. Since these two questions are about the Neck, it is appropriate to answer them together.\n\nThe main advantage of Transformer is the long-range global attention mechanism between each patch. And the internal information can be used as auxiliary features to improve fitting ability. Because each inner patch only performs attention with other 15 inner patches in a fixed region, we only need to use fewer transformer blocks to extract local fine-grained features compared with complex global representations. Taking PyramidTNT-Medium as an example, the number of transformer blocks used for global features in each stage is 2,8,6,2 respectively, while the number used for extracting local representations is 1,2,1,1. And the introduction of local representations does not increase too much computation burden. Therefore, the main representations are the global features, while the local fine-grained features are auxiliary. From the experimental results, the introduction of local fine-grained features is very effective for object detection.\n\nFor Q2, with PyramidTNT-Tiny as the backbone, if we let det tokens interact with coarse-grained features first and then with fine-grained features, the AP decreases from 42.2 to 41.7. Oppositely, our proposed method with FCAN significantly improves the AP to 42.6. The reason is as follows. Transformer-based detectors such as YoloS and ViDT only consider global coarse-grained features, so their det tokens directly interact with outer patches. After making cross-attention with extracted features in the Neck part, the prediction results are acquired by putting the det tokens into prediction head (MLP). So the outputs of the Neck that can be directly connected with the prediction head are more important. Local fine-grained features are auxiliary to helps global coarse-grained features obtain local information, but they are not mainstream features. So we let det tokens interact with fine-grained inner patches first and then with coarse-grained outer patches. \n\nFor Q3, the role of Neck is to let det tokens make cross-attention with feature maps. We utilize the Multi-Scale Deformable Attention (MSDA) to complete this cross-attention. In previous detectors, i.e., ViDT or Deformable-DETR, the output is $MSDA(Q_{det},\\mathcal{F})$ and there is no shortcut from $Q_{det}$. In our method, the previous trial was to perform $MSDA(Q_{det},\\mathcal{F}\\_{I})$ first, then is another MSDA between $MSDA(Q_{det},\\mathcal{F}\\_{I})$ and $\\mathcal{F}\\_{O}$. However, the AP slightly decreased from 42.2 to 42.0 in this way. With the introduction of a shortcut to the MSDA between $Q_{det}$ and $\\mathcal{F}\\_{I}$, AP is significantly improved to 42.6. The reason is as follows. Det tokens share the same weights with outer patches in the backbone. For the fourth stage in the backbone, det tokens also perform cross-attention with outer patches. Hence, compared with inner patches, there is a close relation between det tokens and outer patches. In terms of local fine-grained features as an auxiliary role, the interaction between det tokens and local features is also an assistant to increase the perception of det tokens. 
Therefore, the introduction of a shortcut is to make identity mapping to ensure that original det tokens can still fully interact with outer patches in the Neck. Since det tokens have been closely related to outer patches in the backbone, we directly perform MSDA between det tokens and global outer patches, as ViDT does. \n\n\n**Again, thank you very much for the kind efforts in evaluating and helping to improve the quality of our manuscript!**",
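A rough sketch of the det-token update order described above, with plain multi-head cross-attention standing in for MSDA (the class and tensor names are hypothetical; the real neck samples only a few points per feature level and operates over multiple scales, which this sketch does not reproduce).

```python
import torch
import torch.nn as nn

class FineCoarseNeckSketch(nn.Module):
    """Illustrative only: fine-grained interaction first (kept as a residual),
    then the main interaction with coarse-grained features."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn_fine = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_coarse = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, q_det, f_inner, f_outer):
        # 1) auxiliary pass over fine-grained tokens, added as a shortcut so the
        #    original det tokens still reach the coarse-grained step unchanged
        q_det = q_det + self.attn_fine(q_det, f_inner, f_inner)[0]
        # 2) main pass over coarse-grained tokens (no extra shortcut here)
        return self.attn_coarse(q_det, f_outer, f_outer)[0]

q = torch.randn(2, 100, 256)     # det tokens
fi = torch.randn(2, 1024, 256)   # flattened fine-grained features (already at the det-token dim in this toy)
fo = torch.randn(2, 256, 256)    # flattened coarse-grained features
print(FineCoarseNeckSketch()(q, fi, fo).shape)   # torch.Size([2, 100, 256])
```

The residual on the first step corresponds to $Q_{det} + MSDA(Q_{det}, \mathcal{F}_I)$ being the query for the second attention, as in the explanation above.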
" **We would like to express our sincere thanks for the constructive comments and suggestions from the reviewer.**\n\n\n\n**W1: The writing needs to be improved slightly.**\n\n**A-W1:** Thanks for the helpful comment. We have checked the whole paper again, mainly to correct the typos and grammar errors. Some of these errors are modified as follows:\n\nIn Line18, the original description is \"**In the past decade, models based on convolutional neural networks (CNNs) used to be the mainstream architecture for object detection tasks.**\", the changed version is \"**The former mainstream architectures for object detection are mostly based on convolutional neural networks (CNNs).**\"\n\nIn Line25, the original description is \"can be divided into three **parts**: backbone, neck and head\", the changed version is \"can be divided into three **components**: backbone, neck, and head.\"\n\nIn Line26, the original description is \"With the development of transformer **used for** vision tasks\", the changed version is \"With the development of transformer **in** vision tasks\".\n\nIn Line27, the original description is \"One is to **replace the backbone with transformer variants in CNN-based object detectors**\", the changed version is \"One is to **replace the CNN-based backbones with transformer variants in object detectors**\".\n\nIn Line38, the original description is \"In DETR, ResNet is used as backbone for extracting features and transformer is proposed to integrate the relations between learnable object queries and extracted image features\", the changed version is \"In DETR, ResNet \\cite{he2016deep} is used as **the backbone** for extracting features and transformer is proposed to integrate the relations between learnable object queries and **intermediate** image features\".\n\nIn Line46, the original description is \"The other is the slow **convergence**\", the changed version is \"The other is the slow **convergence speed**\".\n\nIn Line61, the original description is \"it incorporates an encoder-free neck structure to **boost** the detection performance without **much increase in computational load**\", the changed version is \"it incorporates an encoder-free neck structure to **further** **boost** the detection performance without **introducing too much computational burden**\".\n\nIn Line70, the original description is \"**Considering the general object detection task**\", the changed version is \"**Take the general object detection benchmark as an example**\".\n\nIn Line91, the original description is \"In this section, we briefly revisit the fine-grained representations in transformer and **detection transformer frameworks**\", the changed version is \"In this section, we briefly revisit the fine-grained representations in transformer and **transformer-based detection frameworks**\".\n\nIn Line137, the original description is \"we introduce the lightweight bottom-up feature integration algorithm **Efficient Multi-scale Feature Integration module**\", the changed version is \"we introduce the lightweight bottom-up feature integration algorithm, **i.e., Efficient Multi-scale Feature Integration**\".\n\n",
" **W2: The results in Tab.1 are impressive, yet the authors are advised to make more discussions on the results to make the paper stronger.**\n\n**A-W2:** Thanks for the helpful comment. The previous discussion about the results in Tab.1 is a little insufficient. We have made a more detailed illustration for this part, as shown below:\n\n We conduct experiments under various computational constraints to demonstrate the effectiveness of our proposed FCDT. Specifically, Tab. 1 shows the comparison of FCDT with other state-of-the-art Transformer-based detectors, including DETR[11], SMCA[52], UP DETR[49], Efficient DETR[13], Conditional DETR[48], DAB DETR[27], DN DETR[50], SAM DETR[51], YOLOS[47], and ViDT[14] on COCO benchmark. The corresponding results elaborate the great potential in accuracy-computation trade-off of our proposed FCDT.\n\n**Compare with tiny detectors.** YOLOS[47] is a canonical ViT architecture for object detection. Although it has a small computational cost, the neck-free design withholds the YOLOS from obtaining high performance, our FCDT achieves +12.7 AP compared to the Deit-tiny based YOLOS. When compared to the recently proposed lightweight ViDT[14], our FCDT outperforms it by +2.6 AP with fewer FLOPs constraints. More specifically, the backbones of ViDT and FCDT attain similar results on ImageNet (74.9 of Swin-Nano v.s. 75.2 of P-Tiny), and the superiority in COCO further demonstrates the improvements brought by our proposed LGCF, FCAN, and EMFI.\n\n**Compare with small detectors.** We further compare our P-small based FCDT with Swin-Tiny based ViDT and several variants of ResNet-50 based DETR. For example, DN DETR[50] accelerates DETR training by introducing query denoising. Our FCDT outperforms it by +1.7 AP with far less computational cost (-17G FLOPs), and we still exceed the ViDT by +1.0 AP, and the amount of calculation is significantly reduced by 37G FLOPs.\n\n**Compare with medium detectors.** For the backbone with P-Medium, our FCDT achieves 48.1 AP with 173G FLOPs. In terms of AP, the detectors close to our method are DN DETR with DC5-ResNet-101 and ViDT with Swin-Small. Compared with them, the computational complexity of our method is lower than these detectors by 109G FLOPs and 35G FLOPs respectively due to our efficient detection framework. Besides, our method still reaches a better detection performance than theirs.\n\nIn addition, when compared to those detectors with ResNet as the backbone, fully transformer-based models like ViDT and FCDT show better trade-off between accuracy and computational cost (higher AP and fewer FLOPs). This also reveals that fully transformer-based frameworks possess great potential for efficient object detection. And our proposed FCDT can obtain the best trade-off among other detectors.",
" \n**Q1: Can the authors further explain the calculation process of Eq. (9)?**\n\n**A-Q1:** Thanks for the helpful comment. Eq. (7) and Eq. (9) are the Multi-Scale Deformable Attention between det tokens and global and local features respectively. Eq. (9) is the description of cross attention between det tokens and global features. \n\nThe basic Multi-Head Attention in Transformer is as shown below:\n\n\n$$\nMultiHeadAttn(z_q,x_k)=\\sum_{m=1}^{M} W_m[\\sum_{k\\in \\Omega_k}^{} A_{mqk}\\cdot W_{m}^{'} x_k]\n$$\nIn the upper equation, $q \\in \\Omega_q$ indexes a query element with representation feature $z_q \\in \\mathbb{R}^C$, where $C$ is the feature dimension, $\\Omega_q $ and $\\Omega_k$ specify the set of query and key elements, respectively, $m$ indexes the attention head, $W_{m}^{'}\\in \\mathbb{R}^{C_v \\times C}$ and $W_{m}\\in \\mathbb{R}^{C \\times C_v}$ are of learnable weights ($C_v = C/M$ by default). The attention weights $A_{mqk} \\propto exp{\\frac{z_{q}^{T} U_{m}^{T} V_m x_k}{\\sqrt{C_v} } }$ are normalized as $ {\\textstyle \\sum_{k \\in \\Omega_k }A_{mqk}=1} $, in which $U_m, V_m \\in \\mathbb{R}^{C_v \\times C}$ are also learnable weights.\n\nIn Multi-Head Attention of Transformer, $k \\in \\Omega_k$ means that each query element must connect to all key elements, which causes a lot of computational burdens and makes the model difficult to train. In Multi-Scale Deformable Attention, each det token only needs to aggregate a small $K_O$ (usually set to 8) set of key contents sampled from the multi-scale feature maps $\\mathcal{F}\\_{O}^{l}$. The sampled values are automatically determined by the model in the training process. The equation is as follows:\n$$\nMSDA(Q_{det},\\mathcal{F}\\_{O}^{l})=\\sum_{m=1}^{M_O} W_m \\left [ \\sum_{l=1}^{L} \\sum_{k=1}^{K_O} \nA_{mlk}\\cdot W_{m}^{'} \\mathcal{F}\\_{O}^{l}(\\phi\\_l(p)+\\Delta p_{mlk}) \\right ]\n$$\nwhere $Q\\_{det}$ is the query element of one det token. $\\phi_l(p)$ is the reference point of the det token re-scaled for the $l$-th level feature map, while $\\Delta p\\_{mlk}$ is the sampling offset for deformable attention. $\\mathcal{F}\\_{O}^{l}(\\phi\\_l(p)+\\Delta p\\_{mlk})$ represents the sampling values of the $l$-th feature map. In another word, MSDA only lets each det token interact with $K_O$ (usually set to 8) instead of all key elements of each feature map. This method greatly reduces the amount of calculation. MSDA also considers a multi-scale method that makes cross-attention with feature maps of different stages.",
" **Q2: Where does the proposed efficient multi-scale feature integration demonstrate efficiency compared to other FPNs?**\n\n**Q3: In which stage is the proposed multiscale feature integration used?**\n\n**A-Q2&A-Q3:** Thanks for the helpful comments. Since Q2 and Q3 are about Efficient Multi-scale Feature Integration (EMFI) , it is appropriate to answer these two questions together. \n\nCompared with other FPNs equipped with a large number of 3x3 convolution layers, our EMFI only consists of upsampling and 1x1 convolution, so it is relatively lightweight and brings little computational burden.\n\nEMFI is used in the backbone part. For backbones, such as ResNet, Swin-Transformer and PyramidTNT, we divide them into four stages, and each stage outputs feature maps with different scales. For a 2D image with scale of $H \\times W $, with ResNet or Swin-Transformer as the backbone, the scale of feature maps in each stage is $\\frac{H}{4} \\times \\frac{W}{4} $, $\\frac{H}{8} \\times \\frac{W}{8} $, $\\frac{H}{16} \\times \\frac{W}{16} $, $\\frac{H}{32} \\times \\frac{W}{32} $. The feature map output by the first stage is ignored to put into Neck due to its insufficient feature extraction ability, and an additional feature map is generated by downsampling after the last stage. Therefore, taking ResNet or Swin-Transformer as the backbone, the scale of four output feature maps are $\\frac{H}{8} \\times \\frac{W}{8} $, $\\frac{H}{16} \\times \\frac{W}{16} $, $\\frac{H}{32} \\times \\frac{W}{32} $,$\\frac{H}{64} \\times \\frac{W}{64} $. These features are used in the Neck part to make cross-attention with det tokens. In our method, the scale of feature maps in each stage is $\\frac{H}{8} \\times \\frac{W}{8} $, $\\frac{H}{16} \\times \\frac{W}{16} $, $\\frac{H}{32} \\times \\frac{W}{32} $,$\\frac{H}{64} \\times \\frac{W}{64} $. In object detection, the feature map scale has a great impact on detection performance, so we keep the outputs shape of our method consistent with Swin-Transformer or ResNet. Therefore, for the feature map with same scale of $\\frac{H}{8} \\times \\frac{W}{8} $, the feature map obtained by Swin-Transformer is generated by 2 stages of Swin-Transformer blocks, while ours is only 1 stage of Transformer block. Compared with Swin-Transformer or ResNet, the feature representations of scale with $\\frac{H}{8} \\times \\frac{W}{8} $ in PyramidTNT are relatively weak. \n\nAlthough the feature extraction capability of the first stage is limited, we still cannot ignore the output features of this stage. So we borrow the idea of FPN and the role of EMFI is to improve the feature representations of the first stage by introducing high-level features into low-level features. For other types of backbones, such as Swin-Transformer, EMFI is not necessary. We hope that this module brings as little computation as possible. Therefore, compared with other FPNs equipped with a large number of 3x3 convolution layers, our module only consists of upsampling and 1x1 convolution, which is relatively lightweight. Experimental results also prove that the AP is significantly improved by introducing EMFI with little increase in the amount of calculation. \n\n\n**Again, thank you very much for the kind efforts in evaluating and helping to improve the quality of our manuscript!**\n\n",
" A fully transformer-based object detector (transformer-based backbone and neck) is an interesting topic. This paper proposes the method of the Fine-grained and Coarse-grained crossing representations for building efficient transformer-based object detectors. In the backbone, this paper maintains both the fine-grained and coarse-grained features and introduce a lightweight local-global cross fusion module. In the neck, the proposed module allows the detection tokens to make attention-based interaction with fine-grained representations firstly, and then perform further interaction with coarse-grained representations. The results on the COCO dataset demonstrate the effectiveness of the proposed method. Strengths\n1. This paper is technically sound and easy to understand.\n\n2. The proposed method is interesting and has a clear motivation. Other transformer-based detectors do not make good use of the fine-grained information, resulting in unsatisfactory performance. The proposed local-global cross fusion module, fine-coarse aware neck, and efficient multi-scale feature integration are novel.\n\n3. The experiments are extensive and ablation studies are comprehensive to understand the proposed method.\n\nWeaknesses\n\n1. The writing needs to be improved slightly.\n\n2. The results in Tab.1 are impressive, yet the authors are advised to make more discussions on the results to make the paper stronger.\n 1. Can the authors further explain the calculation process of Eq. (9)?\n\n2. Where does the proposed efficient multi-scale feature integration demonstrate efficiency compared to other FPNs?\n\n3. In which stage is the proposed multiscale feature integration used?\n Yes, the authors have discussed the limitations and potential negative social impact of their work in the Appendix.",
" This paper introduces the Fine-grained and Coarse-grained crossing representations for building a Detection Transformer by using a local-global cross fusion module and Fine-Coarse Aware Neck. Strengths:\n+ The paper is well organized.\n+ The performance boost seems good with a slight increase in Flops.\n\nWeaknesses:\n- The whole pipeline seems to be fine, but lacks deep insight. Most of the operations are feature aggregation between different levels, which has been deeply explored in object detection.\n- Some of the statements in this paper are confusing. For example, how to understand “combine the MSDA(...) to Qdet and interact with Fo” in Line192.\n- Writing errors. For example, “the lightweight bottom-up feature integration algorithm Efficient Multi-scale Feature Integration” should be “the lightweight bottom-up feature integration algorithm, i.e., Efficient Multi-scale Feature Integration” in Line138. The authors are advised to double-check for similar errors.\n\nOverall, the simple and lightweight structure brings a huge performance boost is the reason I agree to accept this article.\n - Why are features of outer and features of inner different sizes in Line148?\n- What happens when det tokens interacts with coarse-grained outer patches first and then with fine-grained inner patches in section3.2?\n- Based on Eq. 8, my understanding is that Qdet and MSDA(Qdet, Fi) have a shortcut, but why Qdet and MSDA(Qdet, Fo) do not have shortcut?\n The authors have discussed the limitations and potential negative societal impact of their work in A.4.",
" This paper studies the topic of 'Fully Transformer-based Object Detector' (except the stem layer, do not use CNN layers for operations). Existing Transformers mainly rely on global interaction between outer patches, but this work proposes a way of integrating attention of inner patches, capturing local & fine-grained information; (1) LGCF: mixing fine & coarse patches with interacting them, (2) FCAN: alternating cross-attention btw <det x fine-grained patch> and <det x coarse-grained patches>, and (3) EMFI: using the first block patches efficiently. The proposed architecture shows a better AP and Speed trade-off compared with other compared detectors. To support their justification, the authors provide some ablation studies on the proposed three components. There are both strengths and weaknesses I found while reading the paper.\n\n### Strength\n1) Integrating the idea of 'local + global information' into Transformer-based object detection is reasonable because improving AP on varying-size objects (especially, small-size) is a major challenge in this domain. \n2) The proposed three components (blocks) look simple and effective when combined. Therefore, they did not harm the inference speed of the detector.\n3) Experimental results are not promising but reasonable improvements.\n\n### Weakness\n1) ***Performance drop for small-size objects***. According to the experiments, $AP_s$ drops from 30.4 to 28.1 compared to ViDT, although more fine-grained information was integrated. I can't understand why this happens. Intuitively, the proposed ideas of fusing more fine-grained information should improve $AP_s$.\n2) ***Incomplete Analysis on Backbones***. In Section 4.3, there is a comparison between PyramidTNT and Swin-Transformer. I think this analysis is not complete to show that the improvement did not come from the replacement. First, ViDT is not designed to be with PyramidTNT. Second, only accuracy and AP measures are reported without 'FLOPS or FPS'. To sum up these, I think a more suitable comparison is combining Swin-Transformer with your three components. The performance trends with ViDT could not tell us their impact on your architecture.\n3) ***Weak Novelty***. The proposed approach mostly relies on existing works. For local- fine-grained information, PyramidTNT is adopted as the backbone, and the proposed three components are very simple and straightforward; this could be an advantage in practice, but academically looks like an incremental paper.\n 1) Could you explain why $AP_s$ drops compared with ViDT (Swin-small)?\n2) I feel hard to know what components make the architecture faster than others. I understand your design is simple and effective but there is no explanation why it is faster than another efficient mechanism (e.g., local attention in Swin - ViDT).\n3) Your architecture seems to be coupled with PyramidTNT. This could be a limitation (low extendability). Do you think your three components can be combined with other backbones as well? If possible, doing and comparing them would be good to prove that the performance improvement is not from the backbone.\n\n Refer to the strengths and weaknesses above."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"6XktyXmW3hV",
"-myShTq_alh",
"PvC4WhKLAs_",
"JKWuGzF8bO",
"8h4wlFcerMG",
"8h4wlFcerMG",
"8h4wlFcerMG",
"hLxWk4NBmnQ",
"hLxWk4NBmnQ",
"hLxWk4NBmnQ",
"4UeUdflDamq",
"4UeUdflDamq",
"4UeUdflDamq",
"4UeUdflDamq",
"nips_2022_iuW96ssPQX",
"nips_2022_iuW96ssPQX",
"nips_2022_iuW96ssPQX"
] |
nips_2022_0zlLhfG6rxI | Bessel Equivariant Networks for Inversion of Transmission Effects in Multi-Mode Optical Fibres | We develop a new type of model for solving the task of inverting the transmission effects of multi-mode optical fibres through the construction of an $\mathrm{SO}^{+}(2,1)$-equivariant neural network. This model takes advantage of the azimuthal correlations known to exist in fibre speckle patterns and naturally accounts for the difference in spatial arrangement between input and speckle patterns. In addition, we use a second post-processing network to remove circular artifacts, fill gaps, and sharpen the images, which is required due to the nature of optical fibre transmission. This two-stage approach allows for the inspection of the predicted images produced by the more robust physically motivated equivariant model, which could be useful in a safety-critical application, or by the output of both models, which produces high quality images. Further, this model can scale to previously unachievable resolutions of imaging with multi-mode optical fibres and is demonstrated on $256 \times 256$ pixel images. This is a result of improving the trainable parameter requirement from $\mathcal{O}(N^4)$ to $\mathcal{O}(m)$, where $N$ is the pixel size and $m$ is the number of fibre modes. Finally, this model generalises to new images, outside of the set of training data classes, better than previous models. | Accept | This paper proposes a new learning-based technique for imaging through multi-mode fibers (MMF). A key idea of this work is to exploit the property that the transmission matrices associated with MMFs are approximately diagonalizable by a Bessel basis. This idea then allows one to significantly reduce the number of parameters, which in turn reduces the memory usage and computational burden of the inverse problem of recovering a natural image from speckle patterns. The authors demonstrated the effectiveness of their proposed algorithm on low-resolution experimentally captured data as well as on a higher-resolution simulated dataset.
| train | [
"DYPHYwRMLX0",
"0-QgR8yA1Wt",
"qTxaO861ByR",
"euccr3gIuJ8",
"msIn9z1u8BK",
"OC0aM2YVtXe",
"Ki9s_sWyLW",
"ADTEPW7XCam",
"4Jem9hOzTJu",
"P1KOGy_Iy8J",
"HzDeNvC4t_h"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! We are happy those sections addressed your most serious concerns.\n\nAlso, thank you for raising the point on cells. We were hoping to give an application where requiring a smaller training dataset would be a benefit, if a future system were to be re-calibrated in situ (with a specific bend). On reflection, we have decided to remove this point, as further details (about, e.g. a significantly more complicated VAE architecture using our BEM method at its core) would be required to avoid the confusion, complicating the message of the paper. As none of the main claims made in the paper depend on this point, we now think it will be cleaner to remove it. Thank you for helping us refine our core message!\n\nWe now better understand your concern around the fMNIST dataset. Your concern is very valid for previous complex linear and CNN based approaches, and we believe this has been an issue in some of the previous published work. However, our approach of using an efficient physics motivated model is designed to avoid overtraining, by having significantly fewer trainable parameters in the core BEP model. We then added the post processing model to tidy and sharpen the images from the output of this model. The final sentence of Sec 3.2 explicitly states this danger of bias to the prior, and the need to design interfaces for safety-critical applications around this issue.\n\nIn our table of results and figures (Fig 3, 4, 5 and 6) we always included both the output of the physics motivated model (BEM) and then that model combined with the post processing model (BEM+PP) so that the contribution of both of these can be seen. We also show that our model trained on fMNIST generalises well to MNIST in Figure 5, which highlights that our physics motivated model has not over-fit only to fMNIST and we provide the results on the higher resolution Imagenet images in section 4.2.2. Finally, the new section we added A.15 shows how performance changes as some bases are removed from the model. We acknowledge your point though about the dangers of using particularly low resolution images, although we felt the need to use these to compare to previous published SOTA methods, and we can add in the paper that while our high-resolution simulated results are very promising, future work should utilise higher resolution images in lab experiments for further validation.\n",
" I thank the authors for their answers, my points are addressed. ",
" Thank you for your very detailed response and for pointing me to A4 and A8; these sections have addressed the most serious of my concerns. \n\nThere was a typo in my review: \"cells are involved in the calibration of a TM\" should have read \"cells are not involved in the calibration of a TM.\" To my knowledge, TM estimation is generally performedas a calibration/training step before imaging and is not done in vivo. Thus the claim that the proposed method would reduce cell bleaching by accelerating the training process seems unlikely.\n\nMy concern about (F)MNIST being toy datasets is that these are very low dimensional datasets (far less than R^{28x28}). Accordingly NN backend may easily learn to reconstruct these images even if Bessel functions are a poor representation for the TM",
" * There's no evidence the proposed method generalizes across MMFs.\n* Finally, to address the limitation concerns: Not discussed. Method is specific to a particular MMF. Moreover, if that fiber bends (like it would in endoscopy) that fibers TM is likely to change.\n\n\nThe task of generalizing across MMFs is not the task we attempted to solve within the paper. As each MMF has a different transmission matrix, there is no simple solution to this, and we make no claims to have solved that problem. We agree that bending in fibers is an unsolved problem, although it is not the problem attempted in this paper. Instead we developed a physics-informed model that reduces the parameter count required to be learned which we show can scale to higher-resolution images than previous methods and due to the massive reduction in parameters can learn with fewer training data points than previous methods.\n\nIn the case of generalizing to new TMs which are the same fiber but in a different configuration, such as after bending, we believe that our proposed architecture will have benefits in addressing this due to fiber bending having previously been shown to amount to mode mixing within the TM (Resisi et al, 2020), so TMs tend to change in a somewhat continuous manner. \n\nThis means that modifying our diagonal complex weight matrix (rather than the entire transmission matrix) according to inferred bend context is a potentially powerful future method to address TM-change due to bending, as it gets to the core of how the TM changes. The reduction of parameters and faster learning can be used in networks that bring multiple configurations together, as we would then be interpolating between matrices scaling as O(N_modes) rather than O(N^4) for NxN images. Hence, one of the main intended benefits of our proposed architecture is in fact that it will help to address the challenge of changing TMs in bending fibres. \n\nWe would be happy to make this clearer in the final version and add this in as a limitation, with implications in a future work section, as we intend to build on this paper’s results to address bending in future papers, allowing progress to future systems which could generalize to new MMF configurations. (But a completely new MMF will always require calibration)\n\nResisi, S., Viernik, Y., Popoff, S.M. and Bromberg, Y., 2020. Wavefront shaping in multimode fibers by transmission matrix engineering. APL Photonics, 5(3), p.036103.\n\n* There are minor inaccuracies in the paper: Line 270 states reducing the calibration dataset size minimizes how long cells have to be imaged; cells are involved in the calibration of a TM.\n\nThank you for highlighting this line. While we are not quite sure what you mean by the cells being involved in the calibration (BTW - there is a typo on L270 though as it should read for imaging cells rather than as imaging cells), our intention had been to highlight examples where reducing the amount of training data could be beneficial in more than just time (requiring lots of physical samples to image because the repeated imaging required of a large dataset would damage physical samples is ameliorated by using our method). 
\n\nHowever, on re-reading we can see that it is clear that a lot more explanation of the specifics of the details of the end use case would be needed to avoid this being confusing, so it might be better for us to cut out this line to improve the focus and clarity of the paper, as none of the main claims or applications requires us to discuss this point in this paper.\n\n* Furthermore, to address the question: How would fitting a low rank approximation of the TM (or its inverse) compare to the proposed method?\n\nFitting a low rank approximation of the TM would require the TM to be known, which is not the case for a new fiber. There are works which seek to characterize the TM for a fiber, which are cited in Section1 “Introduction”, but these often require expensive experimentation and a large amount of data collection. Once this was known, a low rank approximation of the TM could be fit, which would reduce the computational expense of holding a TM in memory and performing the large matrix multiplication required to pass an image through a TM.\n\nOur method is arguably an example of a low rank approximation of the matrix, informed by the physics of the problem. You may be interested in the results of the new section A.15 added to the appendix which shows how performance changes as you remove further bases from the model.\n",
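The generic alternative the reviewer asks about can be made concrete with a few lines of NumPy: given a fully characterized TM, one could fit a truncated-SVD low-rank approximation and measure the reconstruction error as the rank is reduced. The matrix below is random and the rank is arbitrary; this is only a sketch of the baseline idea, not the paper's Bessel-basis model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a (known) complex transmission matrix.
tm = rng.standard_normal((512, 512)) + 1j * rng.standard_normal((512, 512))

U, s, Vh = np.linalg.svd(tm, full_matrices=False)
rank = 64
tm_low_rank = (U[:, :rank] * s[:rank]) @ Vh[:rank]

rel_error = np.linalg.norm(tm - tm_low_rank) / np.linalg.norm(tm)
print(f"rank-{rank} relative error: {rel_error:.3f}")
```

The key difference emphasized in the response is that this route requires the TM to be measured first, whereas a physics-motivated basis constrains the parameterization before any calibration data are collected.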
" * The only experimental validation is on (F)MNIST which isn't convincing.\n \nThis very brief comment does not make clear what is not convincing about these results. We interpret the comment on FMNIST as not being convincing as being related to the perceived simplicity of FMNIST as a task (but if we have misunderstood this, please detail what is not convincing (what does the reviewer think is erroneous in the results), and what would be needed to convince them (bearing in mind the state of the art in this field)?\n\nOur argument that FMNIST is not overly simple to use, is that in this paper we are not trying to classify these images (which is clearly a simple task), we are using them as a test of our reduced parameterisation optical fiber calibration process. If we were suggesting a more complex model (e.g. a multi-layer VAE) then the low-resolution images could well be argued to be vulnerable to overtraining if the model were over-parameterised. However, we are able to demonstrate that the vastly reduced parameterisation of the Bessel representation, in line with the physics of the fibre, works both on well-controlled, simulated, but much higher resolution images than the current state-of the-art, as well as on the lower-resolution lab experiments common in papers in the field as the current state of the art (published at NeurIPS by Moran et al. 2018, and in Nature Communications by Caramazza et al 2019), and which are therefore a sensible baseline for this work.\n\nAs further evidence of the rigor of our experimental work, in the appendix we report a series of tests of the robustness of the framework to changes in the quality of the available data:\nA.6 Results for Accounting for Losses in a Real System - testing diagonal assumption\nA.7 Effects of Noise on Speckled Images - we compare the effect of noise on the speckled images when using a theoretical TM.\nA.14 Impacts of Reducing Training Dataset Size\nAt the request of another reviewer we have added: A.15 Impacts of Under Parameterising the set of Bessel Function Bases - which investigates sensitivity of the result to using the full Bessel basis.\n\nIn summary, we tested the method on two different fibre models, we show with the loss values that our method performs well and visually that our method clearly produces the correct objects in both perfectly controlled simulations, and with real world experimental data, and in the appendix we systematically test the process with a range of conditions, and we provide executable code and data for others to use, so we believe that this contribution has had a suitable experimental validation to be a meaningful contribution at this stage, for others to build on. \n\n\n\n* The simulated speckle data (Fig 6) looks considerably more structured than the experimental data (Fig 5), suggesting the adopted model is inaccurate.\n\nTo provide some clarification, the results in Fig 3 in section “4.1 Real Multimode Fibre” are the results using the experimental lab data (Moran et al. 2018) are produced using a fiber with ~8000 modes taking as input 28x28 images and producing 224x224 speckled patterns, and the results in the section “4.2 Theoretical TM\" are the results using simulated data based on a completely different fiber. 
In figs 4 and 5 in section “4.2 Theoretical TM\" we use simulated data produced with a fibre with ~1000 modes taking as input 28x28 images and producing 180x180 speckled patterns, and the simulated data for fig 6 in section “4.2 Theoretical TM\" is produced using a fiber with ~1000 modes taking as input 256x256 images and producing 256x256 speckled pattern. Fig 3 is experimental data and Figs 4, 5, and 6 are simulated data. (See Appendix A.4 in the paper for details on the simulated TMs, and (Moran et al. 2018) for details on their experimental set-up.)\n\nConsidering the use of different fibres, it is expected that the speckled patterns could look different. However, you can see other papers in the published literature showing speckles in experiments which show both types of speckle structure visible in our paper, from the same MMF, depending on illumination, e.g. Figure 3 of https://opg.optica.org/oe/fulltext.cfm?uri=oe-20-10-10583&id=232812 which suggests that the speckles visible in our work appear very similar to those obtained in other labs for other fibres and experimental conditions.\n\n* Some details are missing. It seems the paper is considering only \"incoherent\" fiber bundles.\n\nThere is a misunderstanding here as we are **not** using fiber bundles. We are considering a single multi-mode fiber, as stated in 2.1 ‘Multi-mode fibres present a clear advantage over single-mode fibre bundles due to having 1-2 orders of magnitude greater density of modes than a fibre bundle’. \n",
" Thank you for your comment acknowledging the significantly reduced parameter count required to be learned by our model and how this could make imaging through a MMF easier. As we demonstrate in the paper, a further benefit is that our proposed architecture also makes it possible to image with higher-resolution images, which was not possible with previous dense methods. In addition, as we demonstrate in Fig 5 the reduced parameter count required to be learned by our model also improves generalization to new unseen datasets, which is a further benefit. \n\nWe will address the concerns below:\n\n* The paper makes extremely strong assumptions on the structure of the TM which are not adequately justified.\n* The paper provides no evidence that Bessel functions accurately describe the structure of higher-dimensional TMs. \n\nWe provide extensive details on the generation of our theoretical TMs in Appendix A4; this includes details about the fundamental physics of optical fibers, demonstrating that Bessel functions describe the structure of the modes of TMs. Furthermore, we provide details about TMs and make a connection to the group theoretic way of thinking used in equivariant neural networks in Appendix A8. \n\nBoth A4 and A8 provide known details about optical fibers and the equations governing light propagation. These sections provide the connection between the literature on light propagation through optical fibers and our model, which was perhaps missed because of the appendix being uploaded with the supplementary data. We have re-uploaded our paper, with the main text and appendix in a single document which is hopefully more convenient for the reviewers. \n\nWhile the submitted main paper linked explicitly to them *“we provide further details into the propagation of light through optical fibers and how we construct theoretical TMs in Appendix A.4 and further details in the inversion of the TM in Appendix A.5”*. We had this information in the appendix as we assumed it was well accepted and dates back to the seventies, see Gloge, 1971. Although, given the extra page allowed before the final paper we will transfer some of this explanatory background to the main body of the paper to make the rationale clearer to readers.\n\nIf these two sections in the appendix do not address your concerns please let us know and we will be happy to provide further clarification. \n\nD. Gloge, \"Weakly Guiding Fibers,\" Appl. Opt. 10, 2252-2258 (1971)\n",
" Thank you for your review and assessment of the strengths of our approach. The assessment of these strengths and the understanding of the model is correct. A further advantage of our approach is that it also generalizes better to new datasets, see Fig 5. \n\nWe will address the concerns below:\n\n* A minor criticism is that the authors could have cited a recent paper (Cite: Li, Shuhui, et al. \"Compressively sampling the optical transmission matrix of a multimode fibre.\" Light: Science & Applications 10.1 (2021): 1-15.) that also exploit the sparsity of the transmission matrix of a multimode fiber. The paper in my opinion should be cited and properly put in perspective with their result.\n\nThank you for pointing out this recently published paper. We have added in the citation for this paper and positioned it with respect to our work within our introduction, noting the exploitation of sparsity in the paper.\n\n\n* Modes of MMF are usually denoted LP modes in the literature, can the authors comments if they are the same as their Bessels? is it a different terminology, or is there a real difference?\n\nThis is just a difference in terminology, where we are trying to make the paper more accessible to a machine learning audience. For the LP modes one assumes linearly polarised light and solves the Helmholz equation with appropriate cylindrical boundary conditions to arrive at the spatial fields defined by Bessel functions. Some examples of which are given in Appendix A.4. We have added that these are LP modes in Figure 8 to make this clear.\n",
" Thank you for your review and acknowledging that our new model allows 256x256 image reconstruction for the first time, that the idea is interesting, and that our paper is well written. \n\nWe will address the concerns below:\n\n* The authors show that the bessel representation is effective in modeling the MM fiber output. However, there is no analysis on it's representation capability with respect to the number of bases.\n* My concern mainly lies in the analysis of the representation capability of the bessel basis. How accurate the basis is for modeling the MM fiber outputs, especially w.r.t. the number of bases used? What are the remaining components that cannot be modeled? Probably visualizing the full weight matrix might be useful to answer these questions.\n\nIn the paper (in 4.1) we discuss some mismatches between the basic theory of Bessel functions and the reality for a specific fiber (e.g. sharp bends, dopant diffusion, elliptical cores), and the paper explored non diagonal matrices (Sec 4.1) and post-processing layers (sec 3.2) for this reason.\nVisualizing a full Transmission Matrix is not usually easy to interpret directly, e.g. for 256 x 256 images we would have a 65536 x 65536 matrix (4,294,967,296 elements).\nDespite this, the reviewer makes an interesting suggestion regarding sensitivity to the number of bases used. We can explore the representation capability with respect to the number of bases, although it is also worth noting that there is usually a natural choice for the number of bases and this is driven by the fiber being used. We have performed the extra analysis requested, and added a new section in the appendix (the new A.15), which you can see in the attached updated pdf. This shows the impact of reducing the number of bases used for the task of reconstructing fMNIST images. Here we consider progressively removing high frequency bases by removing the highest frequency radial bases. We show results with the full basis set which comprises 21 radial frequencies (1061 bases) and reduced basis sets which comprise 14 radial frequencies (932 bases), 7 radial frequencies (567 bases), and 4 radial frequencies (322 bases). We hope the addition of this new section alleviates your concerns by showing that if the basis set is under-parameterised, such that high frequency bases are missing, the method can still achieve good reconstructions, although some higher frequency information is lost.\n\n\n* Second, many works using MM fibers for display applications modulate incident wavefront to generate high-fidelity images directly out of the fiber. Could the proposed method be also used for this application?\n\nIn principle this is possible. In an idealized fiber, as is the case with the simulated transmission matrix, it would be a simple case of adding output spots as needed for the image. For real data, the phase between spots will play a role by producing an interference between neighboring output spots. This can be iteratively mitigated with knowledge of the full transmission matrix as shown in Figure 6 in work by Cizmar and Dholakia https://opg.optica.org/oe/fulltext.cfm?uri=oe-19-20-18871&id=222508. However, as far as this work has investigated, it remains unclear as to whether enough information about the transmission matrix is collected to do this projection. We appreciate the interesting suggestion. ",
" This paper proposes to use a bessel basis and a post-processing neural network for modeling the optical transmission through a multi-mode optical fiber. The key idea is that multi-mode fiber outputs are band-limited with respect to the bessel functions of different modes. Applying this representation for the multi-mode fiber output allows us to reduce the number of parameters significantly, reducing the computational burden of inverse estimation: from a speckle image to a target natural image.\n Strengths\n- For the first time, a 256x256 image is reconstructed from a speckle image, thanks to the efficient parameterization.\n- Using the bessel representation is a neat idea for the neural modeling of the multi-mode fiber outputs.\n- The paper is well written.\n\nWeaknesses\n- The authors show that the bessel representation is effective in modeling the MM fiber output. However, there is no analysis on it's representation capability with respect to the number of bases. \n My concern mainly lies in the analysis of the representation capability of the bessel basis. How accurate the basis is for modeling the MM fiber outputs, especially w.r.t. the number of bases used? What are the remaining components that cannot be modeled? Probably visualizing the full weight matrix might be useful to answer these questions.\n\nSecond, many works using MM fibers for display applications modulate incident wavefront to generate high-fidelity images directly out of the fiber. Could the proposed method be also used for this application? The authors adequately described the limitations.",
" This paper assumes that the transmission matrices (TMs) associated with multimodal fibers (MMF) are diagonizable using a Bessel Function Basis. Using this assumption, the authors propose a new learning-based technique for imaging through MMFs. The speckle image is first multiplied by the learned diagonal inverse TM and is then denoised by a post-processing CNN. The proposed method is tested on low-resolution toy (MNIST and FMNIST) experimentally captured data from (Mora et al. 2018). It is also tested on simulated TM data on higher-res image-net data. ## Strengths \n\nThe diagonal assumption massively reduced the number of parameters that need to be learned and could make imaging through MMF much easier. \n\n## Weaknesses\n\nThe paper makes extremely strong assumptions on the structure of the TM which are not adequately justified. The only experimental validation is on (F)MNIST which isn't convincing. The paper provides no evidence that Bessel functions accurately describe the structure of higher-dimensional TMs. The simulated speckle data (Fig 6) looks considerably more structured than the experimental data (Fig 5), suggesting the adopted model is inaccurate.\n\nSome details are missing. It seems the paper is considering only \"incoherent\" fiber bundles.\n\nThere's no evidence the proposed method generalizes across MMFs.\n\nThere are minor inaccuracies in the paper: Line 270 states reducing the calibration dataset size minimizes how long cells have to be imaged; cells are involved in the calibration of a TM.\n\n How would fitting a low rank approximation of the TM (or its inverse) compare to the proposed method? Not discussed. Method is specific to a particular MMF. Moreover, if that fiber bends (like it would in endoscopy) that fibers TM is likely to change.",
" The paper reports on a novel NN architecture for image transmission through a multimode fiber. The approach is physics based, and rely on modeling a NN using bessel modes, that mimicks the pre-existing modes in a MMF. the main advantage is to reduce strongly the number of parameters to train (compared to a complex-valued fully connected layer, the current SoTA) and the required number of examples, while maintaining similar performances and generalization ability. The concept is validated numerically and experimentally, at scale (256*256) previously unattainable. The paper is overall clearly written, the results are sound, and the comparison to the previous litterature is qualitatively and quantitatively appropriate. The main advantage of the technique (less parameters) meaning less memory usage and less examples needed, means that it can scale to complex and large scale images, which is demonstrated. \n\nA minor criticism is that the authors could have cited a recent paper (Cite: Li, Shuhui, et al. \"Compressively sampling the optical transmission matrix of a multimode fibre.\" Light: Science & Applications 10.1 (2021): 1-15.) that also exploit the sparsity of the transmission matrix of a multimode fiber. The paper in my opinion should be cited and properly put in perspective with their result. -Modes of MMF are usually denoted LP modes in the literature, can the authors comments if they are the same as their Bessels? is it a different terminology, or is there a real difference? \n I think it really doesn't apply here. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"qTxaO861ByR",
"Ki9s_sWyLW",
"OC0aM2YVtXe",
"msIn9z1u8BK",
"OC0aM2YVtXe",
"P1KOGy_Iy8J",
"HzDeNvC4t_h",
"4Jem9hOzTJu",
"nips_2022_0zlLhfG6rxI",
"nips_2022_0zlLhfG6rxI",
"nips_2022_0zlLhfG6rxI"
] |
nips_2022_WE92fqi-N_g | VICE: Variational Interpretable Concept Embeddings | A central goal in the cognitive sciences is the development of numerical models for mental representations of object concepts. This paper introduces Variational Interpretable Concept Embeddings (VICE), an approximate Bayesian method for embedding object concepts in a vector space using data collected from humans in a triplet odd-one-out task. VICE uses variational inference to obtain sparse, non-negative representations of object concepts with uncertainty estimates for the embedding values. These estimates are used to automatically select the dimensions that best explain the data. We derive a PAC learning bound for VICE that can be used to estimate generalization performance or determine a sufficient sample size for experimental design. VICE rivals or outperforms its predecessor, SPoSE, at predicting human behavior in the triplet odd-one-out task. Furthermore, VICE's object representations are more reproducible and consistent across random initializations, highlighting the unique advantage of using VICE for deriving interpretable embeddings from human behavior. | Accept | This paper proposes a method to learn meaningful representations of data by incorporating a pick-odd-one-out task on triplets of images to learn embeddings through variational inference using a spike-and-slab Gaussian prior.
The reviewers agreed that the paper was well written and had a clear narrative, that the results appeared convincing, and that the use of the spike-and-slab prior to determine the appropriate dimensionality was novel and interesting.
Where pertinent questions were raised by reviewers on the exposition of the estimation of the upper bound, on the validity of the triplet task and its phrasing, and on the utility of employing VI over a prior model (SPoSE), the authors provided responses that appear to address these issues reasonably.
The primary issue with the manuscript appears to be mainly with framing. A graphical model with annotations for the triplet observations, or something similar---essentially a figure to explain the model---would have helped make things a bit easier to situate for the reader.
On balance, though, the paper appears to have more merits than issues. I would strongly urge the authors to actually make the edits discussing other baselines (Reviewer VDhE), and to consider moving the discussion on determining the number of latent dimensions to the main paper rather than the supplement, as it appears to be an important distinguishing feature over prior work.
| train | [
"06FlhTKaVaU",
"9F0umk-7Pom",
"pHLspvFF4Gu",
"Aa5hWEgH5N",
"9M_TEVtg21",
"6-TR1nReXrM",
"G_kN0Pqkzeb",
"3XHnpwMkIgv",
"VjRUZq88Vtp",
"AcrSijBMF5E"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **It's unclear for the neural network settings and how to get the embedding vectors.**\n\nVICE has access to the human responses rather than to the image representations of the objects. From these responses, VICE learns an embedding representation for each object. Although there is a softmax function involved to get a probabilistic estimate for the odd-one-out choice in a triplet, given the learned embeddings, VICE does not use a neural network to map input to latent representations.\n\nOur method is related to Bayesian non-negative matrix factorization but uses variational inference and gradient descent instead of iterative coordinate descent solvers. VICE learns two weight matrices, one weight matrix for the means of the embedding values ($\\mu$) and another weight matrix for the standard deviations of the embedding values ($\\sigma$). Using the reparametrization trick (described in line 139), for each mini-batch, we sample an embedding matrix, $X \\in \\mathbb{R}^{m \\times d}$, and select the object embeddings with one-hot vectors. Recall that each of the $m$ objects is assigned a numerical index. This is described in detail in §3.3.1.\n\nAfter convergence, one can use the means of the embedding values as representative object embeddings for interpretability purposes. This is what we did in §4.7.\n\n**Since it is a supervised method, how about the efficiency of labeling a pair from a triplet? Why not label the concepts directly?**\n\n**Some work could promote dimensional explanation without supervision, such as disentanglement learning. The main limitation is the problem settings in the odd-one-out triplet task.**\n\nWe agree that it would be possible to use pre-existing methods to transform objects into embeddings (e.g., via a neural network), and that one could use disentangled representations for that purpose. This would be appropriate if the primary goal was to predict human behavior. In our case, however, the odd-one-out task is meant to elicit the use of object-relevant knowledge without biasing the experimental participants. This is what makes it possible to identify embedding dimensions that explain behavior, without the need to postulate them *a priori*.\n\n**What are the effects of the biases from judgments? It's important but missed. For instance, will the learned representations from ordinary people be different from those from specialists?**\n\nAs multiple reviewers raised this question, please see our general response.\n\n**Does the Spike-and-Slab prior for p(X) come from an empirical observation?**\n\nAs multiple reviewers raised this question, please see our general response.\n\n**Why the embeddings should be positive?**\n\nIn informal discussions with the authors of Zheng et al. (2019), we learned that the authors initially considered real-valued embeddings for objects. However, the result was finding a small number of very difficult-to-interpret embeddings, similar in properties to well-known word embeddings such as Word2Vec or synsets. Yet, when adding a non-negativity constraint, the number of dimensions was seen to increase, the prediction performance improved, and the interpretability of the dimensions was dramatically improved. This was not entirely surprising, given that the association between interpretability and non-negativity is widely observed in the literature across a large range of machine learning models. Intuitively, one can view a non-negative dimension as standing for the degree of presence of a particular characteristic. 
One theoretical account of this phenomenon is given in [4].\n\nWith respect to concept embeddings, it is still not clear whether the success of non-negative embeddings is due to theoretical reasons owing to favorable mathematical properties of non-negative embeddings, or because the human brain actually uses non-negative representations. In VICE embeddings on word stimuli, we have seen that properties that we might intuitively consider to lie on a positive-to-negative spectrum are sometimes represented in the model by two different, positive embeddings: one corresponding to the positive side of the spectrum, and the second corresponding to the negative. While this is suggestive that the brain represents negative qualities as a separate scale from positive qualities, it is not possible to resolve these questions without further scientific investigation.\n\n**Reference**\n\nLast, but not least we thank the reviewer for pointing us to the interesting reference “DEFT: Distilling Entangled Factors by Preventing Information Diffusion.”\n\n**Final note**\n\nAs a final note, we thank the reviewer for their feedback and for suggesting that we discuss the effects of potential biases in the data. We will discuss this further in the paper, as mentioned in the general response.\n\n**References**\n\n[4] Donoho, David, and Stodden, Victoria. \"When does non-negative matrix factorization give a correct decomposition into parts?\" Advances in neural information processing systems 16 (2003).\n\n",
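The sampling-and-prediction step described in the response above (learned means and standard deviations, the reparametrization trick, non-negative embeddings, and a softmax over the three candidate pairs of a triplet) can be sketched in a few lines of PyTorch. The class name, initialization, the ReLU used to enforce non-negativity, and the omission of the spike-and-slab KL term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripletEmbeddingModel(nn.Module):
    def __init__(self, num_objects=1854, num_dims=100):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_objects, num_dims) * 0.01)
        self.log_sigma = nn.Parameter(torch.full((num_objects, num_dims), -3.0))

    def forward(self, triplets):
        # triplets: (batch, 3) integer object indices
        eps = torch.randn_like(self.mu)
        X = self.mu + self.log_sigma.exp() * eps      # reparametrization trick
        X = F.relu(X)                                 # non-negative embeddings
        a, b, c = (X[triplets[:, i]] for i in range(3))
        # Score each candidate "most similar" pair; the remaining object is the odd one out.
        sims = torch.stack([(a * b).sum(-1), (a * c).sum(-1), (b * c).sum(-1)], dim=-1)
        return F.log_softmax(sims, dim=-1)            # log-probabilities over the 3 pairs

model = TripletEmbeddingModel(num_objects=10, num_dims=5)
logp = model(torch.tensor([[0, 3, 7], [2, 5, 9]]))
print(logp.shape)  # torch.Size([2, 3])
```

After training, the posterior means (here `mu`) would be the quantities inspected for interpretability, as described in the response.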
" **Overall, I think the novelty of this paper is limited. Essentially, the idea of learning a sparse, positive (and hence interpretable) semantic space to be consistent with human similarity judgements (or finish the odd-one-out triplet task) is no different from SPoSE (by Charles et al). Following the same modeling process, VICE just makes some improvements over SPoSE with the help of VI techniques. The choice of Gaussian variational distributions for ease of reparameterization and the use of a spike-and-slab prior to induce sparsity are also based on previous successful experiences, so the ideas are not new there.**\n\nThe novel inference method is the primary contribution of this paper. We have demonstrated that it meets the state-of-the-art for prediction performance on current datasets while producing more stable representations with lower dimensionality. Furthermore, our method substantially outperforms previous methods in low data regimes. This is of great interest to the cognitive science community since data collection can be very expensive and thus our method is likely to have a high impact.\n\nMoreover, Bayesian extensions of effective models are of general interest to the ML community. See for example the following papers that offer Bayesian approaches to matrix/tensor methods ($S \\coloneqq XX^{T}$ in our paper in line 109 is low-rank, so our method is related):\n\n- Modelling Relational Data using Bayesian Clustered Tensor Factorization (NeurIPS 2009)\n- Statistical mechanics of low-rank tensor decomposition (NeurIPS 2018)\n- A Particle-Based Variational Approach to Bayesian Non-negative Matrix Factorization (JMLR 2019)\n\nThe main (technical) differences between SPoSE and VICE are described in §1, lines 40-63, but we will highlight them further in the main text if our paper gets accepted.\n\n**It seems that the real novelty comes more from the derivation of a PAC bound on the generalization of SPoSE and VICE models, but I’m not sure I understand it since the author does not present the proof clearly enough.**\n\nThe PAC bound for this type of model is indeed novel. We have added a detailed proof of the proposition to the appendix, which we reference in the main text. With the full proof, the bound is actually slightly improved, which we have also incorporated into the main text and into the quantization algorithm in the Appendix.\n\n**Final note** \n\nAs a final note, we thank the reviewer for their feedback and for suggesting to clarify the proof of the PAC bound on the generalization performance of SPoSE and VICE models. Through their suggestion, we were able to improve the bound.",
" **The authors claim that one of the property of VICE is that VICE has an automated procedure for determining the number of dimensions needed to explain the data. Therefore, there isn't much need for running lots of configurations to tune hyper parameter. However, reading section 3.3.2, there is still a large number of hyper parameters. Even if there is a commonly used threshold present, it doesn't erase the fact that these hyper parameters still needs to be tuned to achieve the best modelling performance.**\n\nAt this point, we would like to reiterate that the _learned representations_ are the central object of interest in this work. These representations are of significant interest to cognitive scientists [1]. One wants good modeling performance on the classification task since this indicates that they have captured an understanding of how humans relate objects. If one were to find a classifier that has the absolute best classification performance, then we would agree with your suggestion, that one should consider using many dimensions, even hundreds. However, this would be of little use for cognitive scientists, since there would be a great loss of interpretability.\n\nThe cut-off value $k$ is not a data-dependent hyperparameter, it is a standard threshold [2,3] for the minimum number of items that compose a psychologically relevant dimension. We find that the recovered representations are insensitive with respect to pi, and the scales for the spike and slab distributions, and so is predictive performance (see §4.5). For sufficiently large d we observed that the method consistently prunes to similar representations, regardless of the choice of d (note the low variance in Table 1 “Selected Dims.”). This point regarding sufficiently large d is mentioned in Appendix D, but not in the main text. In light of your comment, we’ve decided to include mention of this in the main text.\n\n**Final Note**\n\nWe thank the reviewer for their feedback, and respectfully ask if they might consider increasing their score in light of our responses.\n\n**References**\n\n[1] Hebart, Martin N., et al. \"Revealing the multidimensional mental representations of natural objects underlying human similarity judgements.\" Nature human behaviour 4.11 (2020): 1173-1185.\n\n[2] Barry J. Devereux, Lorraine K. Tyler, Jeroen Geertzen, and Billi Randall. The centre for speech, language and the brain (CSLB) concept property norms. Behavior Research Methods, 46 (4):1119–1127, December 2013. doi: 10.3758/s13428-013-0420-4.\n\n[3] Ken McRae, George S. Cree, Mark S. Seidenberg, and Chris Mcnorgan. Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, 37 (4):547–559, November 2005. doi: 10.3758/bf03192726.438\n",
" **\"The paper claims to work on a very novel task of modelling human behaviour in an odd-one-out triplet task... it is also not clear how this task is associated with the concept embedding goal being evaluated. It is completely ok if learning a interpretable concept embedding is a nice side benefit of the task, or vice versa, but it would be clearer if the authors discuss this.\"**\n\nThe problem that we address in the paper is understanding mental representations of objects, through the embedding vectors discovered through our model. These object embeddings are of wide scientific interest, as demonstrated by the citations of the Nature Human Behavior [1] article introducing SPoSE (a journal extension of [Zheng et al 2019]). Obtaining interpretable embeddings is the primary motivation of our work, not a side benefit. Indeed, we only consider model performance as a secondary consideration compared to the embeddings themselves: the performance of the model is informative to us only to the extent that it validates our assumptions about the data (e.g. spike and slab prior, sparsity), and demonstrates efficient use of the human behavior data As such the novelty of our approach lies primarily in our principled use of variational techniques and technical innovations, including the development of pruning procedures and convergence criteria, that optimize the reproducibility of the embeddings, rather than our adherence to any particular behavioral task.\n\nWe model the triplet task because cognitive scientists deem it to be a good way of probing a participant’s intuition about object similarities, without biasing them. An alternative approach would be to ask participants for numerical ratings of similarity (e.g. on a scale of 0-10) for pairs of objects, instead of triplets. However, participants may differ in how they calibrate their rating scales and are often inconsistent. Two participants with potentially the same notion of object similarity might give quite different numerical ratings, because each person's way of converting their intuitive sense of similarity to a numerical rating may be accomplished through an individually unique and somewhat arbitrary mapping. In contrast, two participants with the same internal representations of object similarities should be able to agree with regard to which object is the least similar to the other two. \n\n**“The paper needs more proof reading. For example, line 22 \"In an alternative, detective, approach, ...\" it look me a while to figure out the grammar here and to understand this sentence.”**\n\nThanks for pointing out the confusing comma usage. We’ve cleaned up our comma usage in line 22 and elsewhere.\n\n**“Triplet task seems to be a classification task. In line 104, the authors mentioned that the dataset is a set of ordered pairs. Why does the pairs need to be ordered? Does the ordering encode certain information?”**\n\nFor a sample in the training data $\\mathcal{D}$, the first entry contains the objects presented to the participant and the second entry contains the two objects selected as being the most similar. So, there technically is an ordering. We’d be happy to remove “ordered” if you feel it is less likely to be confusing.\n\n**Prior choice. Spike-and-Slab is chosen in this work based on analysis and intuition. Since this is a major novelty component of the proposed model, it would be more convincing if the authors could provide some experiments on the choice of prior. 
Such a comparison would back up the analysis on previous work and support the choice of Spike-and-Slab as prior.**\n\nAs this concern was also raised by another reviewer, please see our general response.\n",
" **Qualitative results for both SPoSE and VICE would be helpful**\n\nIn addition to the VICE dimensions, we added qualitative results for SPoSE to the supplementary material of our revised manuscript.\n\n**I am not sure if there are other baselines possible for such a paper (such as sysnet and NNSE used in Zhang et al.)**\n\n[Zheng et al., 2019] compared the performance of SPoSE vectors with synset vectors and NNSE vectors in terms of accuracy at predicting behavior in the odd-one-out-task and other tasks. These are reasonable baselines, in that synset vectors are the only text-derived embeddings that pertain to each object (rather than the word naming it), and NNSE vectors are a text-derived embedding that is sparse and positive. However, as both synset vectors and NNSE vectors generally performed worse than SPoSE in [Zheng et al.], we decided against using them as a baseline in this paper, since the focus is on comparing SPoSE and VICE. If the paper is accepted, we will add a note to this effect in the discussion section of the paper.\n\n**I am not sure what the authors mean by: \"We estimate an upper bound on the prediction accuracy by using the repeats in the test set.\" How is the upper bound in accuracy found?**\n\nFor the human response data for THINGS (Zheng et al. 2019; Hebart et al., 2020) and Adjectives, a random subset of triplets was chosen to be presented multiple times to different participants. For a given triplet - repeated over many participants - this provides a way to estimate the distribution of responses over all participants. If the response distribution is (0.2, 0.3, 0.5) for a given triplet, then the best predictor for the participants' responses is the third object. This results in an accuracy score of 50%, averaged across repetitions. Alternatively, one may observe a distribution of (0.1,0.8,0.1) for a different triplet. The best one could do is to identify the second object as the odd-one-out, and get 80% accuracy. From this, we can see that no classifier can do worse than 33%.. Taking the average best prediction accuracy over all of the repeated triplets gives us an estimate for the best possible average prediction score. This is the upper bound.\n\nWe’ve added this to the supplementary material for clarity.\n\n**The authors do not discuss any societal limitations. It would be good to discuss if in the compression process if it is possible that the model is implicitly encoding any biases…**\n\nMultiple reviewers raised this question, please see our general response.\n\n**Final note**\n\nWe thank the reviewer for their feedback and their suggestion on discussing potential societal limitations. As mentioned in the general response, we will add further discussion of this to the paper.\n",
" We would like to thank all reviewers for taking the time to provide thoughtful reviews. We were pleased to see that the reviewers unanimously recommended the paper for acceptance and found many positive aspects to our paper.\n\n**VDhE**: “The paper seems theoretically sound as it improves considerably over the baselines. It is written and explained well and has a clear story.”\n\n**NZMy**: “There is a lot of discussion on the intuition of the design of VICE, and discussion on comparison between VICE and SPoSe. I really enjoyed reading these discussions.”\n\n**fyj5**: “The paper is overall clearly written and well-organized. The experiment section clearly states the setup/evaluation protocol and presents detailed analyses.”\n\n**tLve**: “This work encouraged interpretable representations, which is highly desirable for robust and trustworthy AI systems.”\n\n\n\nHere we address a couple of points that were shared among more than one reviewer. We have also provided individual reviewer responses.\n\n**-Potential biases in the data-**\n\n**Reviewer VDhE**: “The authors do not discuss any societal limitations… it is possible that the model is implicitly encoding any biases that may exist in the data, since its directly labeled by humans, and there is not an absolute \"ground truth\"\n\n**Reviewer tLve**: “What are the effects of the biases from judgments? It's important but missed. For instance, will the learned representations from ordinary people be different from those from specialists?”\n\n**Response**: The goal of our method is to identify general mental representations of objects. The dimensions identified by a model reflect semantic characteristics that explain task performance for many subjects in the experimental subject population (Amazon Mechanical Turk subjects in the United States). As such, it is possible that they reflect biases widespread in that population. Furthermore, the choice of population may affect the identified dimensions. That is, a chéf may classify food items differently from a lay subject, and a linguist would likely have a more complex representation of an adjective. The effects of expertise or developmental stage in mental representations are of obvious interest to cognitive scientists. Therefore, we envision further research in those areas, which may additionally provide some indication of how widely representations can vary. \n\nWe will add an additional paragraph including these points to a revised version of the manuscript.\n\n**-The use of a spike-and-slab prior-**\n\n**Reviewer NZMy**: “Prior choice. Spike-and-Slab is chosen in this work based on analysis and intuition. Since this is a major novelty component of the proposed model, it would be more convincing if the authors could provide some experiments on the choice of prior. Such a comparison would back up the analysis on previous work and support the choice of Spike-and-Slab as prior.”\n\n**Reviewer tLVE**: “Does the Spike-and-Slab prior for p(X) come from an empirical observation?”\n\n**Response**: Firstly, we have tried a Laplace prior and our results on the validation data were not better than SPoSE. This was our original motivation to consider other priors. Secondly, spike-and-slab priors are commonly used in the literature as a sparsity-inducing prior alternative to the Laplace prior. Thirdly, the distribution of SPoSE dimensions empirically appeared to be better modeled by a spike-and-slab distribution. 
This is described in §3.3.1, lines 149-154,\n\n\n“As discussed above, SPoSE induces sparsity through an $\\ell_{1}$ penalty which, along with the non-negativity constraint, is equivalent to using an exponential prior. Through examination of the publicly available histograms of weight values in the two most important SPoSE dimensions (see Figure 2 in B.3), we observed that the dimensions did not resemble an exponential distribution. Instead, they contained a spike of probability at zero and a wide slab of probability for the non-zero values. To model this, we use a *spike-and-slab* Gaussian mixture prior.”\n\nLastly, if a zero-mean Gaussian prior was better suited to our data than the spike-and-slab Gaussian mixture prior, we should have seen the components of the mixture collapse into one of the two Gaussians, which we did not observe.",
" The paper proposed VICE, a way to learn interpretable embeddings that mimic that of a human. The paper is different from its main baseline (SPoSE) by using variational inference with a \"spike and slab\" Gaussian prior. The paper improves upon SPoSE by not having a hyperparameter that finds the # of embedding dimensions to ensure sparsity. VICE is able to improve upon SPoSE in the low-data regime, which the paper argues, is the most relevant setting in cognitive science. Strengths: \n- The paper seems theoretically sound as it improves considerably over the baselines\n- The paper is written and explained well and has a clear story\n- The results in the low data regime are very convincing as the model improves performance by ~20% in the lowest-data regime.\\\n- The dimensionality reduction procedure and spike-and-slam priors used by the paper is interesting.\n\nWeaknesses:\n- It would be preferred if the paper expanded on Fig 1, to give readers a better intuitive example of what the structure of the dataset is and some gt labels in the dataset, since it isn't obvious which one of the thruple is the \"odd-one-out\"\n- Qualitative results for both SPoSE and VICE would be helpful (instead of just for VICE\n- I am not sure if there are other baselines possible for such a paper (such as sysnet and NNSE used in Zhang et al.)\n- I am not sure what the authors mean by: \"We estimate an upper bound on the prediction accuracy by using the repeats in the test set.\" How is the upper bound in accuracy found? Please see above. The authors do not discuss any societal limitations. It would be good to discuss if in the compression process, if it is possible that the model is implicitly encoding any biases that may exist in the data, since its directly labeled by humans, and there is not an absolute \"ground truth\" available for such datasets.",
" In this paper, a model named VICE is introduced for learning interpretable object concept embeddings by modelling human behaviour in an odd-one-out triplet task. It is shown that VICE predicts human behaviour close to the estimated best attainable performance across three datasets and that VICE outperforms a competing method SPoSE, in low sample regimes. In addition, the authors claim that VICE has an automated procedure for determining the number of dimensions needed to explain the data. Strength:\n1. The paper is relatively well written and the proposed method is easy to follow and understand.\n2. The proposed method VICE is technically correct, including modelling definition, equations and derivations.\n3. Both qualitative and quantitative comparisons are given to showcase the modelling performance and certain properties of the proposed method.\n4. There is a lot of discussion on the intuition of the design of VICE, and discussion on comparison between VICE and SPoSe. I really enjoyed reading these discussions.\nWeakness:\n1. The paper claims to work on a very novel task of modelling human behaviour in an odd-one-out triplet task. However, after reading the description, it seems like a common classification task. You are given three input, and a set of two as label. It would be informative if the authors can expand the discussion on the task description part.\n2. Related to point 1, it is also not clear how this task is associated with the concept embedding goal being evaluated. It is completely ok if learning a interpretable concept embedding is a nice side benefit of the task, or vice versa, but it would be clearer if the authors discuss this.\n3. The paper needs more proof reading. For example, line 22 \"In an alternative, detective, approach, ...\" it look me a while to figure out the grammar here and to understand this sentence.\n4. Triplet task seems to be a classification task. In line 104, the authors mentioned that the dataset is a set of ordered pairs. Why does the pairs need to be ordered? Does the ordering encode certain information?\n5. Prior choice. Spike-and-Slab is chosen in this work based on analysis and intuition. Since this is a major novelty component of the proposed model, it would be more convincing if the authors could provide some experiments on the choice of prior. Such a comparison would back up the analysis on previous work and support the choice of Spike-and-Slab as prior.\n6. The authors claim that one of the property of VICE is that VICE has an automated procedure for determining the number of dimensions needed to explain the data. Therefore, there isn't much need for running lots of configurations to tune hyper parameter. However, reading section 3.3.2, there is still a large number of hyper parameters. Even if there is a commonly used threshold present, it doesn't erase the fact that these hyper parameters still needs to be tuned to achieve the best modelling performance. Please refer to the previous section for detailed comments and discussion. yes",
" This paper introduces VICE, a variational inference method for embedding object concepts into a vector space so as to obtain sparse and non-negative representations of them. VICE follows the same approach as its predecessor (SPoSE) to model the odd-one-out triplet task, but addresses SPoSE's major limitations by taking advantage of recent VI techniques. In addition, it also derives a PAC learning bound for SPoSE and VICE models, which can be used to estimate generalization performance or determine sufficient sample size. Pros:\n\nThe paper is overall clearly written and well-organized. The experiment section clearly states the setup/evaluation protocol and presents detailed analyses.\n\nCons:\n\nOverall, I think the novelty of this paper is limited. Essentially, the idea of learning a sparse, positive (and hence interpretable) semantic space to be consistent with human similarity judgements (or finish the odd-one-out triplet task) is no different from SPoSE (by Charles et al). Following the same modeling process, VICE just makes some improvements over SPoSE with the help of VI techniques. The choice of Gaussian variational distributions for ease of reparameterization and the use of a spike-and-slab prior to induce sparsity are also based on previous successful experiences, so the ideas are not new there. It seems that the real novelty comes more from the derivation of a PAC bound on the generalization of SPoSE and VICE models, but I’m not sure I understand it since the author does not present the proof clearly enough.\n (1) The main difference from SPoSE need to be highlighted, technically.\n N/A",
" This work comprehensively analyzed the limitations of SPoSE from the aspects of the posterior distribution, estimation method, and convergence criterion. The proposed method VICE addressed the above limitations with elegant mathematical support. The authors reinterpreted the objective of SPoSE from the perspective of maximizing a posteriori (MAP) estimation, which is insightful for future work. The proofs are solid, especially dimensionality reduction and convergence. The experimental results were also positive to support the claims. Strengths :\n+ This work explicitly described the introduced assumptions and how to induce the proposed method. This part is easy to read and understand.\n+ This work originally proposed VICE with a unimodal posterior for representing each object and variational inference for optimization to learn from human judgments about object similarity.\n+ This work encouraged interpretable representations, which is highly desirable for robust and trustworthy AI systems.\n\nWeaknesses:\n- It's unclear for the neural network settings and how to get the embedding vectors.\n- What are the effects of the biases from judgments? It's important but missed. For instance, will the learned representations from ordinary people be different from those from specialists?\n\n - Since it is a supervised method, how about the efficiency of labeling a pair from a triplet? Why not label the concepts directly?\n- Does the Spike-and-Slab prior for p(X) come from an empirical observation?\n- Why the embeddings should be positive? This method needs numerous labels from human judgments. However, some work could promote dimensional explanation without supervision, such as disentanglement learning. The main limitation is the problem settings in the odd-one-out triplet task. Despite the limitations, I am looking forward to seeing the benefit of supervision and possible approaches to building large triplet datasets by automatic techniques.\n\nReference:\n1. DEFT: Distilling Entangled Factors by Preventing Information Diffusion"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3,
3
] | [
"AcrSijBMF5E",
"VjRUZq88Vtp",
"Aa5hWEgH5N",
"3XHnpwMkIgv",
"G_kN0Pqkzeb",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g",
"nips_2022_WE92fqi-N_g"
] |
nips_2022_vRwCvlvd8eA | Chefs' Random Tables: Non-Trigonometric Random Features | We introduce chefs' random tables (CRTs), a new class of non-trigonometric random features (RFs) to approximate Gaussian and softmax kernels. CRTs are an alternative to standard random kitchen sink (RKS) methods, which inherently rely on the trigonometric maps. We present variants of CRTs where RFs are positive, a key requirement for applications in recent low-rank Transformers. Further variance reduction is possible by leveraging statistics which are simple to compute. One instantiation of CRTs, the optimal positive random features (OPRFs), is to our knowledge the first RF method for unbiased softmax kernel estimation with positive and bounded RFs, resulting in exponentially small tails and much lower variance than its counterparts. As we show, orthogonal random features applied in OPRFs provide additional variance reduction for any dimensionality $d$ (not only asymptotically for sufficiently large $d$, as for RKS). We test CRTs on many tasks ranging from non-parametric classification to training Transformers for text, speech and image data, obtaining new state-of-the-art results for low-rank text Transformers, while providing linear space and time complexity. | Accept | The primary motivation of the paper is the scalable training of transformers, particularly their efficient softmax-attention approximation (3). As a classical approach relying on trigonometric random Fourier features (RFF) does not guarantee positivity (which makes the training of transformers unstable), the authors consider non-trigonometric RFFs. Particularly, they propose a GERF (generalized exponential random features, specified in (4)) for the approximation of the Gaussian kernel (and the softmax kernel) which beyond the previously designed positivity of RFFs [15], can also ensure the boundedness of the random features. They analyze when these RFs give rise to unbiased kernel approximation (Theorem 3.1), establish their variance (Theorem 3.2 for fixed (x,y) inputs), and restricting its free parameters (A to be real, s = 1) they specialize the design to the minimum variance estimator referred to as OPRF (optimal positive random feature). They show tail bounds (Theorem 4.2) and attention approximation guarantees for OPRFs. They also present a DIRF (discretely-induced random features) construction with focus on the Poisson and Geometric designs, which with a shift (Section 3.2.3) can be turned to be positive. The resulting CRT (chefs' random tables) kernel approximation family is illustrated in classification (on UCI benchmarks) and in training transformers (in NLP, audio and image context).
Kernel methods are without doubt at the forefront of machine learning; developing new kernel approximation schemes is of significant interest to the NeurIPS community. As it was assessed by the reviewers, the authors present novel and valuable theoretical insights in this respect, with convincing numerical illustrations. As the reviewers pointed out the manuscript could be improved somewhat by (i) more detailed references to results from prior work, and (ii) making it more accessible to wider audience. Please incorporate these comments in the final version of the manuscript. | train | [
"PjmyUMJb8b9",
"Gn9xKjDqCnt",
"Xhx06X2jKkc",
"i6yoGhWMi7t",
"QiYI1jcrRJN",
"mFXVGs7Dkh",
"4YXrXARiVX0",
"8Vx3n8YDEeD",
"tyBA_IGN1hv",
"QHPpS1KMXo",
"FxNEp3oiRdN",
"zMG-0xCex9S"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you again for your time and helpful comments. We hope we have addressed your concerns and you might consider raising your score. If you have any further concerns, we'd be glad to address them.",
" We would like to sincerely thank the Reviewer for all the comments.\n\n*It seems that the discretely-induced RFs are sensitive to outliers*\n\nThank you for the comment. While this is a valid concern regarding DIRFs, in the end we have to run an experiment and see which variant performs best in practice. What we find out experimentally is that a) in Figure 2, in `sphere` and `heterogen` regimes, GeomRF+ outperforms other positive-valued variants and b) in Table 1, GeomRF is the second best method after OPRF. Hence, we conclude that there might be scenarios where DIRFs are useful for practitioners. Also, the OPRF method proposed in our paper doesn't raise mentioned concerns.\n\n*Some symbols in Theorem are given without definition e.g., C(|x+y|) in Theorem 4.1.*\n\nThe main feature of C in Theorem 4.1 is that it's constant with respect to d, M. For further clarification, we added an explicit definition of C in the proof of Theorem 4.1 in Appendix (lines 614-615).\n\n*Does the proposed OPRF and discretely-induced RF scheme still construct positve RFs for new data instead of the given data X?...*\n\nThat is a good point unless the data is given beforehand (as it is in Transformers) or we have prior knowledge that the data is lower-bounded by C. Note that this is not the case for OPRF which is positive-valued for any inputs.\n\n*Could the authors discuss more the experiment in Figure 2?...*\n\nThank you very much for the comment. Our intuition on why GERF achieves a significant gain in top plots is that GERF generalizes TrigRF, PosRF and OPRF. Though it is not positive-valued as OPRF or PosRF and that is why we plot it in the top row. The bottom row corresponds to positive-valued variants. We plot them separately because positivity is a restriction which might result in a higher variance, but it's important for the application in Transformers. In the revision, we added a plot where all variants are drawn together (Figure 5 in the Appendix).\n\n\n*Could the authors provide the comparison of variance w.r.t the number of features (M)?* \n\nFigure 2 corresponds to M=1. Choosing M > 1 would result in the same analytical variance divided by M for all methods so it's unnecessary. \n\nHere we report ablation accuracy results over M for the kernel classification experiment:\n\nM | TrigRF | PosRF | GERF | PoisRF | GeomRF | OPRF | PoisRF+ | GeomRF+\n\n16 | 35.5 | 46.9 | 47.4 | 49.0 | **49.1** | 48.5 | 41.3 | 43.2\n\n32 | 35.5 | 50.5 | 51.2 | 52.2 | **52.6** | 51.8 | 44.1 | 46.5\n\n64 | 35.5 | 51.3 | 54.0 | 54.2 | 55.2 | **55.4** | 47.0 | 50.0\n\n128 | 35.5 | 54.3 | 56.1 | 55.5 | 56.8 | **57.8** | 49.9 | 52.3\n\n256 | 35.5 | 55.6 | 58.1 | 56.5 | 57.8 | **59.7** | 51.9 | 55.0\n\nWe see that OPRF consistently outperforms the baselines (TrigRF, PosRF) and also outperforms or is competitive with other methods proposed in the paper. Further, OPRF shows the best performance among positive-valued random features (PosRF, OPRF, PoisRF+, GeomRF+) in all settings. As for the choice of M, we see that performance increases as M grows which is expected. Hence, in practice, a good strategy is to select M as big as the compute budget permits which would alleviate an expensive grid search over M. We added these additional results to the revision (Appendix 8.10.2, Table 4).",
" *I do wonder though if it would be possible to have a proper kernel-based nonparametric model (e.g. SVM or KRR)...*\n\nWe run an additional experiment and evaluate the kernel ridge regression (KRR) model in the same task of predicting logits on the same set of UCI benchmarks. We follow the kernel classification setup closely with the difference that the model is KRR instead of Nadaraya-Watson kernel regression. For KRR, instead of the $\\sigma$ parameter, we tune the ridge parameter $\\phi$ on the logarithmic grid of 10 values from 0.01 to 100. We select the best $\\phi$ value on the validation performance. The results are as follows:\n\nDataset | TrigRF | PosRF | GERF | PoisRF | GeomRF | OPRF | PoisRF+ | GeomRF+\n\nabalone | 10.1 | 21.4 | 21.9 | 23.2 | **25.1** | 21.8 | 16.3 | 13.6\n\nbanknote | 38.7 | 99.1 | 99.8 | **100.0** | **100.0** | 99.7 | 90.8 | 99.2\n\ncar | 36.1 | 70.7 | **70.8** | 34.0 | 39.0 | 70.5 | 62.7 | 67.4\n\nyeast | 15.9 | 49.2 | 51.8 | 39.4 | 52.2 | **52.6** | 5.2 | 20.8\n\ncmc | 34.1 | 46.6 | 47.3 | 44.9 | **49.4** | 47.8 | 37.2 | 40.8\n\nnursery | 27.5 | **58.5** | 57.4 | 30.0 | 31.0 | 57.7 | 41.0 | 46.6\n\nwifi | 13.7 | 97.2 | **98.1** | 92.7 | 97.1 | 97.3 | 36.1 | 81.6\n\nchess | 11.0 | 17.2 | 16.9 | 12.5 | 13.7 | **17.4** | 12.7 | 16.3\n\nAverage | 23.4 | 57.5 | 58.0 | 47.1 | 50.9 | **58.1** | 37.8 | 48.3\n\nWe observe that, again, OPRF shows the best average performance which is also slightly better than for the kernel regression model (58.1 against 57.8, Table 1). We added these results into the revised paper (Tables 7, 8).\n\n*Comparison to other (than Performers) efficient Transformers in Sections: 5.3.1, 5.3.2 and 5.3.3*,\n\n*Long Range Arena*\n\nThank you very much for your suggestions and comments. Following Reviewer’s comment, We run experiments on 3 LRA datasets (ListOps with 2K tokens, Retrieval with 4K tokens, Image with 1K tokens). We have reported the results in the Appendix: Section 8.11. As we see in Table 18, FAVOR++ improves on the performance of Performer for: ListOps (where the accuracy improves from 36.00 to 42.65) and Retrieval (where accuracy improves from 53.82 to 60.40). \nOn both ListOps and Retrieval datasets we get *the best performance* when compared to *11* other Transformers’ architectures: regular softmax Transformer [55], Synthesizer [52], Sinkhorn [53], Sparse Transformer [10], Reformer [32], Local Attention [43], Longformer [2], Linformer [58], BigBird [64], LinearElu [31] and Performer [15].\n\nWe also run experiments for the other two LRA tasks, namely: Imdb reviews (4K tokens) and Pathfinder (1K tokens). We did not put the results for these two tasks in Table 18 since we did not manage to reproduce for them the numbers reported in “Long Range Arena: A Benchmark for Efficient Transformers” (probably due to the hyperparameter setup discrepancy; this was not the case for the other three tasks which are reported in Table 18). We want to emphasize that for these two tasks FAVOR++ provides substantial improvements over Performer.\n\nWe also would like to note that even though the field of scalable Transformers is an impactful application of our methods, the main topic of this work is not the class of efficient Transformers (nor new efficient attention techniques), but more accurate random feature (RF) methods for approximating Gaussian and softmax kernels. 
Furthermore, while considering scalable Transformers, the most relevant comparison is with those existing techniques for efficient attention computation that apply random feature map mechanisms for softmax kernel estimation, since our goal is to compare the effectiveness of different RF methods for Gaussian/softmax kernel estimation in downstream tasks. Thus in the experiments with Transformers, we target FAVOR+ which, to the best of our knowledge, is the only existing mechanism applying RFs for softmax kernel estimation (see: our experiments in Section 5.3.1, 5.3.2, 5.3.3). For the convenience of the Reader, we have added comparison with several Performer variants (applying different kernels) to illustrate the importance of the softmax kernel (which, as shown, is in general the most robust and which approximation is the subject of this work) as well as other Transformers (see: the above section regarding new LRA experiments). The critical comparison is the one with Performers applying FAVOR+ method for softmax kernel estimation. All experimental results clearly show that our approximators outperform this method.\n\n*I think it would be useful to have a sentence about why this is needed in the introduction being that it is one of the main motivating factors for the paper as discussed in lines 40-41*\n\nWe add a sentence in lines 35-36 in the revision: \"Positive random features guarantee that the denominator in self-attention is a sum of positive numbers, hence it cannot be negative or too small.\"\n\n*In Line 59, a − sign is missing within the Gaussian kernel formula*\n\nFixed in the revision, thank you very much for pointing this out !",
" We would like to sincerely thank the Reviewer for all the comments.\n\n*I wonder if it is possible to get positive RFs outside of this domain…*\n\nThank you very much for the question. While the case of real A, s = 1 is very elegant theoretically and results in substantial improvements in practice, it is unclear how to optimize variance of GERFs (Equation 6) in the most general case and we leave it to future work. Also, it remains unclear whether it is possible to get positive-valued random features when A is complex, since random features become complex-valued and we need to make sure that both real and complex parts have positive signs for all omegas, x and y, which is non-trivial.\n\n*...the various squared norm terms present in it are replaced with their expectations over the dataset. Does this affect the theoretical analysis given for the OPRFs in Section 4?*\n\nTheorem 4.1 only relies on the fact that A is real which always holds for OPRFs even when we use average statistic. Theorems 4.2, 4.3 further rely on the fact that A is a negative real number which also holds when || x + y ||^2 is substituted by its average since the average is positive. We added these clarifications in the revision by slightly modifying Theorem 3.3 and Section 4.\n\n*It was unclear to me which method GERF refers to in Section 5.1 and 5.2.*\n\nGERF refers to a method when A is a complex number and s can be +1 or -1. In this general case, we do not know a closed form optimum for (A, s), hence we use numerical minimization for the variance expression (Equation 6) where we use average statistics (Equation 8) instead of || x + s y ||^2. We mention the details of numerical minimization in Appendix 8.10.1: \"We use… two L-BFGS-B routines of 50 iterations to minimize A in GERF for s = -1 and +1 respectively\". We added a clarifying sentence about that in the revision (Appendix 8.10.2, Experiment details).\n\n*In Figure 2, does OPRF refer to the i.i.d. or the orthogonal variant?...*\n\nIn Figure 2, we compute the variance for M = 1 random feature, so we cannot use orthogonality there. In Table 1, though, we use orthogonal variants for TrigRF, PosRF, GERF and OPRF. We run an additional experiment with non-orthogonal (i.i.d. Gaussian) omegas and obtain the following results (notation: non-orthogonal accuracy / orthogonal accuracy):\n\nDataset | TrigRF | PosRF | GERF | OPRF\n\nabalone | 12.0 / 12.0 | 15.5 / **16.0** | **17.7** / 17.0 | 16.7 / **17.1**\n\nbanknote | 66.2 / 66.2 | **83.9** / 83.4 | 93.2 / **92.4** | 92.3 / **92.6**\n\ncar | 66.3 / 66.3 | 68.9 / **69.2** | 70.5 / **70.9** | **69.9** / 69.5\n\nyeast | 29.7 / 29.7 | **34.6** / 34.4 | 42.8 / **42.9** | 42.1 / **44.4**\n\ncmc | 46.6 / 46.6 | 44.7 / **45.1** | 47.4 / **47.8** | **47.3** / 46.3\n\nnursery | 31.3 / 31.3 | 73.2 / **77.4** | 63.8 / 63.8 | 75.8 / **78.9**\n\nwifi | 15.2 / 15.2 | 84.6 / **88.8** | 93.0 / **93.3** | 92.1 / **93.3**\n\nchess | 16.5 / 16.5 | 19.6 / **20.2** | 20.4 / 20.4 | **20.4** / 20.2\n\nAverage | 35.5 / 35.5 | 53.1 / **54.3** | 56.1 / 56.1 | 57.1 / **57.8**\n\nWe observe that the orthogonal variant does not harm, or improve the results in most cases. Remarkably, two positive-valued random features (PosRF and OPRF) benefit from orthogonality when averaged over all benchmarks. This agrees with our theoretical findings that orthogonal features are more suited for positive-valued features (see Note under Theorem 4.1). We added these additional results to the revision (Table 6).\n",
" We would like to sincerely thank the Reviewer for all the comments.\n\n*Theorem proofs:*\n\nThanks for your thorough pass through the appendix and for concrete suggestions on presentation improvement. We have significantly changed the proofs of Theorems 3.1, 3.2, 3.4, 3.5 in the revision by adding many clarifications and de-densifying chains of equations. We hope this addresses the concern about readability. We would be happy to address more comments/ideas related to clarity of proofs during the discussion period.\n\n*Writing … given to an expert in the field:*\n\nThank you very much for the comment. In the camera-ready version we will de-densify the paper and add paragraphs providing additional intuition, including: (a) explanation why positive random features are particularly useful for training implicit low-rank-attention Transformer models as well as: (b) sketches of the proofs listing the main conceptual ideas.\n\n*Possible extensions of the presented results:*\n\nThank you for the comment. We believe that our paper leads to several new exciting questions, in particular regarding the optimal variants of the positive random feature mechanisms, as mentioned by the Reviewer and explained in the paper. This undoubtedly will be the subject of our future work.",
" *References to prior results:*\n\nWe have cited in the paper several works on random features, in particular the trilogy of the foundational papers of the field ([30,40,41], l.20). Altogether, we cite 17+ papers on random features, including theoretical results (e.g. the quality of RF-based approximation for kernel regression, QMC variants of RFs and more) and practical ones (e.g. random vs Nystrom features). Following Reviewer’s comments, we will include additional references with discussion, in particular:\n\n* Random Feature Attention, Peng et al., ICLR 2021,\n* Implicit Kernel Attention, Song et al., AAAI 2021,\n* Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond, Liu et al. 2021. \n\n*Incremental improvement as compared to FAVOR:*\n\nWe emphasize that our new RF-mechanisms can be applied beyond the FAVOR setting (implicit Transformers), in particular whenever RFs for a Gaussian/softmax kernel are applied. This includes (in addition to scalable Transformers): kernel ridge regression, SVM methods, Gaussian processes, Predictive State Recurrent Neural Networks and more.\n\n*...can you please comment on the resulting kernel that they approximate?*\n\nAll random features discussed in the paper approximate the Gaussian kernel – we mention that in line 82 for TrigRFs and PosRFs, line 114 for GERFs and line 152 for DIRFs.\n\n*Is there any difference to approximation or numerical properties of random features from Eq. (4) for different values of constants?*\n\nArbitrary values of A, B, C, D, s may not result in the approximation of the Gaussian kernel. Theorem 3.1 gives the conditions when GERFs are an unbiased approximation of the Gaussian kernel. As we observe in Theorem 3.1, B, C, D can be expressed through A, s which are free parameters. Different values of A, s result in a different variance of the estimator and also whether the features are positive/non-positive and bounded/unbounded. We optimize A, s to reduce the variance while the approximation remains unbiased and features are positive-valued. For the optimum (OPRF) we find that random features are also bounded which is an important property to get tight concentration results (Theorem 4.2).\n\n*In experiments for Figure 2, can you clarify how the variance is computed? Is this at the level of Gaussian kernel or the softmax kernel in Transformers?*\n\nFigure 2 illustrates variance for the Gaussian kernel estimation – we emphasized that in line 217 in the revision. The variance for a given pair of x and y is computed based on the analytic formulas (Equations 6, 11, 13). \n\n*How numerically stable are features in Section 3.2.2 (geometric) given that there is factorial in the denominator?*\n\nWe normalize features so that their magnitude is bounded by one by first computing everything in logarithms and then subtracting the maximal random feature value. This doesn't affect computations since the same random features appear in the numerator and denominator. We use `scipy.special.loggamma` for computing factorial logarithms, analogous functions exist in the deep learning software (Tensorflow, Pytorch, JAX).\n\n*Why is FAVOR+ failing dramatically on \"Uptraining\", Figure 4?*\n\nThank you very much for the comment. We believe the performance gap in the Uptraining setting (Figure 4) is due to the lower variance of FAVOR++ compared to FAVOR+. Low variance is more important in the Uptraining scenario compared to Scratch, since the approximation should be backward compatible with exact attention. 
This explains why the gap is bigger than in the first scenario (Scratch).\n",
" We would like to sincerely thank the Reviewer for all the comments.\n\n*Discussion of the prior work regarding poor performance of the original Fourier random features (RFs) for Transformers and the need of positive RFs:*\n\n*Could you review arguments from prior work that would motivate positive random features?*\n\nThank you very much for the comment. To the best of our knowledge, this observation was established in [13], which we refer to several times in the manuscript (as early as in l.5 of the Abstract). Furthermore, in l.100-104 we explain why positive RFs are particularly useful for Transformer training. In the Transformer setting, very often many attention-matrix entries are close to zero. If RF-mechanisms that do not provide positiveness are applied, they can lead to approximators taking negative values for such small entries. This might lead in principle to approximate partition functions (defined as the sums of attention-matrix entries in rows), used to renormalize attention, taking negative values and consequently very unstable training. \n\nThe experimental analysis of stability of trigonometric random features is done in [13] – compare training curves in Figure 5 (right), training with trigonometric features is very unstable. Also, see Figure 16 (left), training with trigonometric features (\"cos\") halts with NaNs very quickly.\n\nFollowing Reviewer’s suggestion, in the camera-ready version we will extend the l.100-l.104 text-block to a full paragraph, providing additional details and intuition. \n\n*Detailed ablation study that would contrast classical random features to the proposed ones relative to a number of hyper-parameters would be useful:*\n\nNote that the optimal parameters of OPRF (FAVOR++ in the context of Transformers) can be computed in O(1) time (equation 7) and don't require grid search. The main hyperparameter is M, the number of random features. Here we report ablation results over M for the kernel classification experiment:\n\nM | TrigRF | PosRF | GERF | PoisRF | GeomRF | OPRF | PoisRF+ | GeomRF+\n\n16 | 35.5 | 46.9 | 47.4 | 49.0 | **49.1** | 48.5 | 41.3 | 43.2\n\n32 | 35.5 | 50.5 | 51.2 | 52.2 | **52.6** | 51.8 | 44.1 | 46.5\n\n64 | 35.5 | 51.3 | 54.0 | 54.2 | 55.2 | **55.4** | 47.0 | 50.0\n\n128 | 35.5 | 54.3 | 56.1 | 55.5 | 56.8 | **57.8** | 49.9 | 52.3\n\n256 | 35.5 | 55.6 | 58.1 | 56.5 | 57.8 | **59.7** | 51.9 | 55.0\n\nWe see that OPRF consistently outperforms the baselines (TrigRF, PosRF) and also outperforms or is competitive with other methods proposed in the paper. Further, OPRF shows the best performance among positive-valued random features (PosRF, OPRF, PoisRF+, GeomRF+) in all settings. As for the choice of M, we see that performance increases as M grows which is expected. Hence, in practice, a good strategy is to select M as big as the compute budget permits which would alleviate an expensive grid search over M. We added these additional results to the revision (Appendix 8.10.2, Table 4).\n\n*Explanation of the benign learning setting in linear regression with random features using the positive variant:*\n\nThank you for the comment. Good performance of OPRF in the benign kernel classification experiment can be explained not only by positivity, but by the fact that it results in the smallest variance of the approximation. 
Here we compute the average log-variance of the approximation among all 8 UCI benchmarks we use in the paper:\n\nTrigRF | PosRF | GERF | PoisRF | GeomRF | OPRF | PoisRF+ | GeomRF+\n\n-0.8 | -0.0 | -20.5 | -0.7 | -7.0 | -20.5 | 36.2 | -16.7\n\nWe observe that OPRF has the smallest variance with the same value only for GERF (which can be explained since GERF extends OPRF but is complex-valued, hence we need to use 2 times less features for the comparable amount of computation). The smallest variance for OPRF can be explained by the input data distribution due to which OPRF's variance (first equation in the proof of Theorem 8.4) is smaller in average than other variances (Section 2.2 and Equations 11, 13). We added these additional results to the revision (Appendix 8.10.2, Table 5).",
" We appreciate all the Reviewers’ time and comments. We have provided a response to all questions from each Reviewer. In line with our responses, we have updated our paper in the new revision on OpenReview. All new changes are temporarily highlighted in violet for convenience. Here we summarize the main changes:\n\n* Presentation improvements/typo fixes in the Introduction, Sections 2.1, 3.1, 4, 5.1.\n\n* Proofs of Theorems 3.1, 3.2, 3.4, 3.5 are rewritten with many clarifications added and equations de-densified.\n\n* New experimental results, ablation study and clarifications are added in Appendices 8.10.1, 8.10.2.\n\n* Comparisons with efficient transformers on Long Range Arena tasks with sequences of length up to 4K added in Appendix 8.11.\n",
" The paper proposes a novel class of random features and provides concentration bounds on the approximation properties for the resulting kernel matrices. The main motivation is to have positive random features because the classical Fourier features have not been performing well in Transformers due the fact that they can take negative values. The latter insight comes from prior work and it could have been discussed in more details here. #### - originality\nThe work builds on existing insights and aims at alleviating some issues in approximating self-attention operators in Transformers using random features. The theoretical results appear to be novel and original, though.\n\n#### - quality\nIt is theoretically sound paper with bounds on the approximation properties of a novel class of random features. It is difficult to tell how sharp these bounds are, (e.g., Theorems 4.2 and 4.3) but the empirical results indicate significant performance improvements over baselines.\nA detailed ablation study that would contrast classical random features to the proposed ones relative to a number of hyper-parameters would be useful and it is missing. For instance, I do not see why would one have more benign learning setting in linear regression with random features using the positive variant.\n\n#### - clarity\nThe paper is mostly clear, but lacks more detailed references to results from prior work. It builds strongly on FAVOR+ algorithm and findings from that paper. However, the review of that work is limited and one would need to read it to completely grasp the motivation.\n\n#### - significance\nThe approach can bring speeds up when computing self-attention mechanisms and appears to have better theoretical and empirical properties compared to prior FAVOR algorithm. However, this improvement is mostly incremental and is likely to have a limited impact on this class of models.\n lines 87-90: when reviewing the features for positive RFs, can you please comment on the resulting kernel that they approximate? The motivating example started with a Gaussian kernel but positive features do not appear to approximate it (or at least this is not trivial to see).\n\nIs there any difference to approximation or numerical properties of random features from Eq. (4) for different values of constants? What is the significance of Theorem 3.1 in this regard?\n \nIn experiments for Figure 2, can you clarify how the variance is computed? Is this at the level of Gaussian kernel or the softmax kernel in Transformers?\n \n How numerically stable are features in Section 3.2.2 (geometric) given that there is factorial in the denominator? \n \n Could you review arguments from prior work that would motivate positive random features? It is not trivial to comprehend why such a constraint would be instrumental for the approximation scheme?\n \n Why is FAVOR+ failing dramatically on \"Uptraining\", Figure 4?\n No ethical concerns.",
" The paper aims at suggesting an effiecent and unbiased estimator of the softmax/Gaussian kernel with positive bounded random features.\nHence, they suggest a mechanism, namely, optimal positive random features (OPRFs)t, and a new set of random features (RF) mechanisms, namely chefs’ random tables, that do not apply trigonometric functions.\n\nTheorems 3.1 and 3.2 by the authors show that they can give a significant variance reduction for Gaussian/softmax kernel estimation by computing the variance of OPRF-based estimators. Theorem 4.2 by the authors use the boundedness of OPRFs to provide the first exponentially small upper bound for the tails of the Gaussian/softmax kernel estimators reliant on positive RFs.\nAs a result, the first uniform convergence findings for the softmax attention approximation with positive RFs utilizing OPRFs are given in Theorem 4.3.\n\nIn contrast to RKSs (Random Kitchen Sinks - techniques that rely on trigonometric nonlinear mappings to approximate shift-invariant), which only asymptotically reduce variance for sufficiently big dimensionality d, the authors demonstrate that orthogonal random projections paired with OPRFs provably reduce the variance of OPRFs for any dimensionality d (see Theorem 4.1). \n\nExperimental results are also provided. Strengths\nIt seems to be a very novel work with a solid theory, the numerical part is also sufficient.\n\nWeaknesses\nMy only comments are regarding the writing, as someone that is not an expert nor very familiar with the community it was not easy for me.\n\n1. When you write a theoretical paper (or a paper with theorems and proofs), the theory must be fully detailed. Many derivations are not clearly explained, e.g., in equation 16 in the appendix, the authors should explain that the last derivation holds by opening the term (x+sy)^2 and another, what about the one before?\nIn (19)- part of Proof of Theorem 3.2, many details are missing explanation, it took so much time to understand why things hold, and still, some things are missing for me. Same for Proof of Theorem 3.5, Proof of Theorem 3.2, and others. \n\n2. The writing in general, is given to an expert in the field, with some additional intuitive explanations, and details - > as someone who is not an expert in this field, it took me too much to understand what is going on, with additional external reading.\n\n\n Please see my comment regarding the proofs in the appendix. It will be very helpful to explain the derivations done in the proofs.\n\nAs I wrote, my only comments are regarding the writing, as someone that is not an expert nor very familiar with the community it was not easy for me, and thus Im very eager to see the other reviewers' comments and authors' responses. \n\n I don't consider these as limitations but, as detailed by the authors, it is possible to expand on a few of the techniques discussed in this study, which might result in algorithms that are even more precise.\nIt is still unclear how to select the theoretically ideal settings for the Generalized exponential RFs system.\n",
" The paper introduces Chefs' Random Tables (CRTs), a family of random features (RFs) for unbiased low-rank approximation of attention matrices in Transformer modules with the ultimate goal of overcoming the quadratic time complexity bottleneck associated to computing full-rank attention matrices. Previous random feature methods for this problem mentioned by the paper have either lacked:\n- positiveness [2], resulting in numerical instabilities during the computation of the attention matrix normalization factor (the inverse of the row sums, line 102).\n- boundedness or even possessing a finite moment-generating function [3], resulting in high variance estimators and inapplicability of Hoeffding or Chernoff-type inequalities for establishing tail-bounds.\n\nThese are the main desiderata that the paper sets out to achieve while maintaining unbiased approximation.\n\nThe proposed CRT family encompasses two new classes of random features: \n\n 1) Generalized Exponential Random Features (GERFs), which as the name says, generalizes previous random features defined using exponential functions and Gaussian random projections, and contains as a special case the trigonometric RFs of [1,2] (also often referred to as Random Fourier Features), and the positive RFs of [3] for approximating attention matrices;\n\n2) Discretely Induced Random Features (DIRFs) that lift a discrete probability distribution to a random feature map by making use of the Taylor expansion of the exponential function. \n\nAs a special case of 1), the authors introduce Optimal Positive Random Features (OPRFs) by minimizing the random feature variance with respect to the degrees of freedom of the GERF class over (a subset of) their domain. As an instantiation of 2), the authors propose Poisson Random Features (PoisRFs) and Geometric Random Features (GeomRFs). Explicit formulas are given for computing the variances of each of these techniques.\n\nAdditional variance reduction is achieved by replacing the Gaussian random projections in 1) by block-orthogonal ensembles of random projections as done in [2] and [3]. When combining this technique with OPRFs for attention matrix estimation, the resulting methodology is called FAVOR++ for which a variance bound is established in Theorem 4.1.\n\nFurther on the theory side, exponential tail bounds are given for OPRFs both with i.i.d. and orthogonal random projections in Theorem 4.2. In Theorem 4.3 a uniform convergence result is given for the first time for approximating attention matrices with positive random features, specifically for the i.i.d. variant of OPRFs.\n\nExperiments are carried out on:\n- synthetic data, comparing the variance of the proposed random features with the previous techniques;\n- UCI classification problems by using single attention layer as a mapping from input to class membership probability;\n- NLP and speech modelling benchmarks;\n- Vision Transformers for testing the performance on a long-range dataset.\n\nThe experiments overall demonstrate the benefit of some of the proposed features on these kinds of learning problems.\n\n[1] Rahimi, Ali, and Benjamin Recht. \"Random features for large-scale kernel machines.\" Advances in neural information processing systems 20 (2007).\n\n[2] Choromanski, Krzysztof, et al. \"Masked Language Modeling for Proteins via Linearly Scalable Long-Context Transformers.\" arXiv preprint arXiv:2006.03555 (2020).\n\n[3] Choromanski, Krzysztof Marcin, et al. 
\"Rethinking Attention with Performers.\" International Conference on Learning Representations. 2020.\n The writing of the paper is clear, and in my opinion it reads relatively well as being somewhat well-versed in both random feature kernel approximations and Transformer models. Although the core theme of the paper is not exactly novel, as it is an extension of previous works, the ideas presented are quite original and interesting.\n\nThe paper seems quite strong from a theoretical perspective with various results given for the proposed techniques, although I have not checked proofs in detail. The authors claim that so far no uniform guarantees have been established for positive RF based approximation of attention matrices, which according to my knowledge is true. This should make the paper an interesting contribution to theoretical research on attention layers and Transformer modules. Overall, the theoretical aspect is quite sound, although I do have some questions regarding the practicalities (see Questions).\n\nThe main weakness of the paper in my opinion is the range of experiments, such as choice of competing methods and lack of long-range datasets. A strong part of the experiments, however, is that it is demonstrated that the proposed RFs allow backwards compatibility for fine-tuning (uptraining) transformer models pretrained with full-rank attention, which should make it possible to scale and extend models pretrained on smaller datasets to much larger datasets by uptraining with the proposed low-rank technique.\n\nTo summarize, the paper is quite interesting theoretically, and the experiments support the benefits of the approach, but I do believe it would benefit from some further comparisons to make it more convincing for practitioners, see the Questions section for more details. Theory:\n- In Theorem 3.3, the minimization problem is restricted to the range when $A$ is real and $s = +1$. The authors mention that this results in positive RFs. I wonder if it is possible to get positive RFs outside of this domain, and if so, do the authors think there could be any significantly lower variance estimators achieved by lifting the domain constraint (potentially highlighting future work)?\n- If I understand correctly, the $\\rho^\\star$ computed in eq.(7) is not what is used in practice when a full matrix is required and not only a single kernel evaluation, but the various squared norm terms present in it are replaced with their expectations over the dataset. Does this affect the theoretical analysis given for the OPRFs in Section 4? In particular, if Theorems 4.1 and 4.2 still hold when $\\rho$ is not chosen as being optimal for pairs $\\mathbf{x}$ and $\\mathbf{y}$? For Theorem 4.3, is it assumed that $\\rho$ is chosen this way or some other way? \n\nExperiments:\n- It was unclear to me which method GERF refers to in Section 5.1 and 5.2. According to Theorem 3.1 and the definition above it, it has two degrees of freedom, $A$ and $s$. Was it mentioned how these were chosen and why? \n- In Figure 2, does OPRF refer to the i.i.d. or the orthogonal variant? Regardless, it would be interesting to show both so as to see whether the orthogonal projections really do provide variance reduction in practice as justified theoretically by Theorem 4.1.\n- In Section 5.2, the prediction model used is an attention lookup over the one-hot encoded class labels of training data with testing input as query and training inputs as keys. 
I understand that the motivation for doing this is to stay within the theme of attention models. I do wonder though if it would be possible to have a proper kernel-based nonparametric model (e.g. SVM or KRR) to demonstrate whether the proposed RFs could be useful for approximating the Gaussian kernel in kernel-based learning tasks? As the theoretical aspect of the paper could be interesting for the wider kernel community, it might be helpful to have some experiments in this direction as well.\n- In Sections 5.3.1, 5.3.2 and 5.3.3, I was somewhat missing more alternative methods, since only Performer variants are presented as linear Transformer baselines. There is a wide range of efficient Transformers, and it would be good in my opinion to compare against at least some of these. The optimal method may of course depend on the choice of dataset, see next.\n- As it is demonstrated in Figure 4 (Right), full-rank attention is feasible until around 1000 steps, hence the main computational benefits of linear complexity methods only become apparent outside this range. However, although computations can still be feasible, such models can fail to retain long-range interactions. The Long Range Arena benchmark [4] seems to be popular in the community for testing linear complexity Transformers on long-range problems. It would probably increase the impact of the paper if potentially strong results can be achieved on these benchmarks. If it does not perform that well, this experiment could still be of value by potentially pointing out some practical drawbacks of the model, which could further be investigated and explained.\n\nRemarks & typos:\n- Early on in the introduction it was mentioned that positive RFs are required for Transformers as in FAVOR+. The answer to why this is the case was only next provided at the end of page 3. I think it would be useful to have a sentence about why this is needed in the introduction being that it is one of the main motivating factors for the paper as discussed in lines 40-41.\n- In Line 59, a $-$ sign is missing within the Gaussian kernel formula\n\n[4] Tay, Yi, et al. \"Long Range Arena: A Benchmark for Efficient Transformers.\" International Conference on Learning Representations. 2020. The authors have adequately discussed this.",
" \nIn this paper, the authors propose two alternative schemes of positive random features for Gaussian (Softmax) kernel approximation. i.e., the Generalized exponential RFs and the Discretely-induced RFs. Variances for both approximations are given. The authors further show the variance reduction of the OPRFs (a special case of GERF) with orthogonal samples holds for any $d>0$ instead of asymptotically only large $d$ in literature. A uniform convergence result is proved for (low-rank transformer) attention approximation. \nPros.\n1. The proposed OPRF scheme seems promising for constructing positive and bounded random features, which may have the potential for accelerating Transformer while maintaining stable training.\n\n2. In the experiments, the variance reduction is surprisingly significant (up to $e^{60} times). \n\n3. The paper is well organized and well presented.\n\nCons. \n1. It seems that the discretely-induced RFs are sensitive to outliers because the requirement $c_l= min_i min ( \\boldsymbol{x}^{(i)} ,\\boldsymbol{y}^{(i)} ) - \\epsilon $ in Line 181. Moreover, it is not rotation invariant, i.e. the approximation change when the data distribution ratotated. Note that Gaussian kernel and softmax kernel are rotation invariant.\n\n2. Some symbols in Theorem are given without definition e.g., $C(\\|\\boldsymbol{x} + \\boldsymbol{y} \\|)$ in Theorem 4.1. \n1. Does the proposed OPRF and discretely-induced RF scheme still construct positve RFs for new data instead of the given data $\\boldsymbol{X}$? Note that $\\boldsymbol{c}$ and statistics computed from a given set $\\boldsymbol{X}$. They may change for a new data $\\boldsymbol{z}$. As $c_l= min_i min ( \\boldsymbol{x}^{(i)} ,\\boldsymbol{y}^{(i)} ) - \\epsilon $, it may be more sensitive for the new data or distribution shift.\n\n2. The variance reduction performance surprised me. However, the discussion in Section 5.1 is quite limited. Could the authors discuss more the experiment in Figure 2? Why the GERF achieves so significant gain? It seems that the baselines in the top rows are different from that in the bottom. Could the authors plot them together for better evaluation? Could the authors provide the comparison of variance w.r.t the number of features ($M$)? NA"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
3
] | [
"nips_2022_vRwCvlvd8eA",
"zMG-0xCex9S",
"i6yoGhWMi7t",
"FxNEp3oiRdN",
"QHPpS1KMXo",
"4YXrXARiVX0",
"tyBA_IGN1hv",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA",
"nips_2022_vRwCvlvd8eA"
] |
nips_2022_WSxarC8t-T | SketchBoost: Fast Gradient Boosted Decision Tree for Multioutput Problems | Gradient Boosted Decision Tree (GBDT) is a widely-used machine learning algorithm that has been shown to achieve state-of-the-art results on many standard data science problems. We are interested in its application to multioutput problems when the output is highly multidimensional. Although there are highly effective GBDT implementations, their scalability to such problems is still unsatisfactory. In this paper, we propose novel methods aiming to accelerate the training process of GBDT in the multioutput scenario. The idea behind these methods lies in the approximate computation of a scoring function used to find the best split of decision trees. These methods are implemented in SketchBoost, which itself is integrated into our easily customizable Python-based GPU implementation of GBDT called Py-Boost. Our numerical study demonstrates that SketchBoost speeds up the training process of GBDT by up to over 40 times while achieving comparable or even better performance.
| Accept | After rebuttal, the reviewers unanimously agree that the submission should be accepted for publication at NeurIPS. Reviewers were excited about the achieved speed-up. | train | [
"n7ur2OZWO0",
"GeH7OAmVbvd",
"-9kidQ81EDY",
"EvjXWu_19-y",
"wR1OAzYnTua",
"WLlA9x4-XIs",
"tDuwNcvcELP",
"nC2IQ6xMspU",
"Vs0WlmZybuj0",
"bQbR5vN_HLK",
"gCLree-cVM",
"6CphUQHtrmQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback and suggestions for improving our work, especially the experiment section! \n",
" Thank you for your feedback and for the idea to compare SketchBoost with TabNet!",
" Dear Reviewer psGW,\n\nThe reviewer-author discussion period will end on August 9. Have you had a chance to look at our answers yet? If you have any questions, we will be happy to answer them.\n\nWe also would like to bring your attention to the fact that we made the font size in Figure 1 bigger as you suggested and added details on this experiment to the Supplementary Material; see Section 2.7. Also, the typos are now fixed, thank you for noticing them. \n\nBest regards, \nThe Authors\n",
" I would like to thank the authors for their response and for clarifying the concerns and questions that were posted. \nNothing else to add on my side. ",
" Thank you for your responses,\n\nThe vast majority of my concerns have been addressed thanks to your detailed responses. Given the actual impact of the GBDT-based multioutput scenario, I would prefer my score for the paper to remain unchanged. Looking forward to SketchBoost's open source.\n\nI'm open to the opinions of other reviewers.\n\n",
" We would like to thank the Reviewers for carefully reading our paper and writing thorough reviews. We are pleased that there is no doubting that Gradient Boosting is widely-used and any improvement of this algorithm is of high practical interest. We are also delighted that reviewers find our work theoretically sound and appreciate its significance.\n\nWe would like to take this opportunity to emphasize that none of the big GBDT implementations properly support multi-output problems, and we hope that our work can positively affect this situation.\n\nBelow we respond to each reviewer separately. Please let us know if you have additional questions or comments.\n",
" We thank Reviewer psGW for the thorough review and for noting the soundness of our results and the good presentation of the paper. Below we give detailed responses to the main concerns raised in the review.\n\n**The empirical results are presented \"for the best k\" … and this seems to imply parameter tuning on the test set**\n\nThank you for this comment. We regret that our presentation gives the impression that we propose to imply parameter tuning on the test set. Some of the empirical results are given “for the best k” only for the reason to summarize many experimental data into simple tables. Therefore, we would like to emphasize that (1) choosing k does not require iteration through a grid or using the test set, (2) the final performance of SketchBoost does not vary much in k; it can be seen, e.g, in Figure 2, (3) results for all values of k are given in the Supplement.\n\nRegarding the value of k, in practice we recommend using a predefined value k=5. This choice is based on our numerical study which shows that there is a wide range of values of k for which our methods work well. It is common in GBDT: modern toolkits have more than 100 hyperparameters, and most of them are not usually tuned (default values typically work well). Nevertheless, it is also possible to add k to parameters that are tuned (on the validation set, of course). In our view, an additional hyperparameter will not play a significant role here taking into account that hyperparameter optimization is usually done using the random search or Bayesian optimization.\n\nWe will clarify this in the revision.\n\n**… the proposed method should probably be compared to other, more generic approaches**\n\nThank you for bringing this topic up as it allows us to specify the place of our work in the literature. Thank you also for the suggested references, we will include them and the discussion below to the Related work section. \n\nLet us emphasize first that none of the approaches from the references combines the following four advantages of SketchBoost: (1) it is applicable to GBDT, (2) it speeds up the training process and does not drop down the quality, (3) it allows one to use any loss function, and (4) it does not rely on any specific data assumptions (e.g., sparsity or class hierarchy) or the problem structure (e.g., multi-label or multi-class). \n\nIn more detail, existing approaches to multi-target problems usually fall into one of the following two categories: algorithm adaptation (AA) or problem transformation (PT). PT approaches reduce the number of targets using some compression techniques. Most of the mentioned papers ([2,4,5,6]) fall into this category and mainly differ in the choice of compression and decompression techniques. They pay a price in terms of prediction accuracy due to the loss of information during the compression phase. As a result, they do not consistently outperform the full baseline (e.g., see Section 2 in [7]). The PT methods are also not generic in the sense that they significantly rely on the problem structure or data assumptions. E.g., the method from [5] does not generalize directly to multi-label classification, the methods from [2,4] — to multi-task regression. The methods based on compressed sensing ([2,4]) can be applied to dense data, but this will affect the performance. Finally, PT methods usually decode the predictions directly to 0/1 but not to probabilities which are required sometimes in practice.\n\nMost PT approaches can be easily combined with SketchBoost. 
Namely, one can (1) make an encoding step beforehand to utilize sparsity or label similarity, (2) apply SketchBoost to a dense dataset, and (3) decode SketchBoost’s outputs to obtain the final predictions. Moreover, one can play with compression level in the encoding step and sketch dimension in SketchBoost to obtain the best possible performance. \n\nThe only paper devoted to AA is [3]. The authors there use random projections to speed up random forests in the context of multi-label classification. The main difference from our work is that we consider GBDT and have no limitations on task type or loss function. \n\nFinally, the authors of [1] consider multi-target regression and construct new targets using random projections of existing ones. This idea is orthogonal to ours since it increases the number of targets (in order to improve performance of an algorithm).\n\n*References* \n[1] Multi-target regression via random linear target combinations. \n[2] Multi-label prediction via compressed sensing. \n[3] Random forests with random projections of the output space for high dimensional multi-label classification. \n[4] Multilabel classification using bayesian compressed sensing. \n[5] Learning compact class codes for fast inference in large multi class classification. \n[6] A nonlinear label compression and transformation method for multi-label classification using autoencoders. \n[7] FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning.",
" We thank Reviewer B4Dg for the thorough review and the vote to accept the paper. We find it encouraging that our empirical results sound convincing and are appreciated by the reviewer. Below we give answers to the comments and questions raised in the review.\n\n**Q1: Random Projection appears to routinely outperform the other two methods … the paper can be improved if the random matrix used in projection can be adaptive...**\n\nGreat question! Certainly, Random Projection outperforms Top Outputs and Random Sampling on the vast majority of datasets. However, there are cases where Random Projection shows slightly worse performance than other approaches; see, e.g., the results for Delicious in Figure 1 in the Supplement. Therefore, if one has sufficient resources and model performance plays an important role, we would recommend testing all three methods. If the resources are limited, according to our numerical study, it is better to use Random Projection. \n\nRegarding the adaptive projection, we hope that we have understood this comment correctly and what is written below answers the question. We recommend a predefined value k=5 (please see our response to Reviewer psGW). One can choose k as a fraction of the output dimension, but our experiments show that in general there will not be much difference in the performance. If your question was about choosing k adaptively at each boosting iteration, then it is a challenging open problem and we do not have a good solution yet. We had an idea to choose the sketch dimension using the error bounds given in Section 1 in the Supplement. However, it is very time-consuming since, to estimate the error at each boosting iteration, one needs to compute singular values of the gradient matrix (which has a quadratic complexity in the output dimension). \n\n**Q2: The sensitivity analysis of sketch dimension k indicates that reducing dimensions can result in distinct performance patterns... will there be a recommendation for selecting k?**\n\nThank you for this comment! Reducing the sketch size certainly can result in distinct performance patterns. Loosely speaking, our methods work similarly to regularization. Depending on the dataset, different values of the sketch size k may be optimal. For example, Figure 2 (in the main text) shows that k=1 is optimal for Random Projections on Dionis, but on SF-Crime or MoA, k=20 performs better. The positive side of our experiments is that our methods work well for a wide range of values of k, which means that one can take simply k=5. However, it is also possible to add k to hyperparameters that are tuned. In our view, k will not play a significant role here taking into account how many hyperparameters boosting frameworks have and that hyperparameter optimization is usually done using the random search or Bayesian optimization. \n\n**C1 and C2: ... some baseline could be considered (e.g. LightGBM according to [1], a Deep learning-based solution [2][3])... related discussion connecting to deep learning methods would be better.**\n\nThank you for bringing this question up since it allows us to emphasize that any improvement in GBDT is of high practical interest. The recent surveys ([1,5]) that discuss what solution for tabular data is better, neural networks or GBDTs, conclude “that algorithms based on gradient-boosted tree ensembles still mostly outperform deep learning models on supervised learning tasks”. 
Hence GBDT is still one of the most powerful tools for solving problems with tabular data.\n\nWe have not compared SketchBoost to Neural Networks because it is not common to use DL approaches as baselines in the GBDT literature and an exhaustive comparison between existing DL approaches and GBDTs deserves its own investigation and is worthy of future work. Nevertheless, we liked the idea of having a DL-based solution in the experiments, so we added TabNet [3] as a baseline. The results and experiment details are available in the Supplement, Section 2.6 (also at https://sites.google.com/view/sketchboost/ for your convenience). They confirm the conclusion made in the surveys — GBDTs outperform TabNet. \n\nRegarding LightGBM, the reason why we have not considered it as a baseline is that it uses the same multiouput strategy as XGBoost (one-vs-all), performs pretty similar to XGBoost (see, e.g., Table 5 in [1]), and is not so efficient on GPU as XGBoost (see, e.g., [4]). \n\n*References*\n\n[1] Deep Neural Networks and Tabular Data: A Survey. \n[2] VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain. \n[3] TabNet: Attentive Interpretable Tabular Learning. \n[4] Benchmarking and Optimization of Gradient Boosting Decision Tree Algorithms. \n[5] Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?\n\n—\n\nThank you for your questions and important comments! We will add as much of our discussion as is possible in the camera-ready version if the paper is accepted (and we have an additional content page).\n",
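As a concrete companion to the Random Projection discussion in Q1 above, here is a minimal NumPy sketch of compressing a gradient matrix before split scoring. It is only an illustration under simple assumptions (a Gaussian projection with the common 1/√k scaling, and the recommended default k=5); the names, shapes, and exact scaling are not taken from the actual SketchBoost implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, d_out = 10_000, 355   # e.g. a multiclass problem with 355 classes (Dionis-like)
k = 5                            # sketch dimension; a fixed k = 5 is the recommended default

G = rng.normal(size=(n_samples, d_out))          # per-sample gradients over the d_out outputs

# Gaussian random projection: compress the output dimension from d_out down to k.
P = rng.normal(scale=1.0 / np.sqrt(k), size=(d_out, k))
G_sketch = G @ P                                 # shape (n_samples, k)

# Split scoring (histogram building, gain computation, ...) is then run on the k-column
# sketch instead of the full gradient matrix, cutting that part of the cost by roughly d_out / k.
print(G_sketch.shape)                            # (10000, 5)
```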
" We thank Reviewer 1K6v for the comprehensive review and the vote to accept the paper. We are also grateful for summarizing the strengths of our work, in particular our analytical and empirical contributions, as well as noting the significance of our results. Below we give responses to the comments and questions raised in the review. \n\n**Conclusions section: I am missing a conclusions section where you summarize the goal of the paper and the findings.**\n\nThank you for this comment! We apologize for not having the Conclusion section in the submission and agree that it is important to summarize the findings at the end of the paper. We will add this section in the camera-ready version if the paper is accepted (and we have an additional content page).\n\n**Table 1: The error metrics reported are after running all algorithms in the GPU right? (same as for Table 2, that is mentioned explicitly).**\n\nThank you for this question. Table 1 and Table 2 refer to the same experiment. But since the processing units do not considerably affect test errors, we decided not to include this information in Table 1 for readability purposes. We are sorry for not making this clear. If you feel that this information is needed in Table 1, we will be happy to add it in the next revision.\n\n**Would be good to clarify why in Table 4 SketchBoost Full is taking less time to run than Random Sampling and Random Projection?**\n\nThank you for this important comment! The reason for this is the following. If the dataset is small, then each boosting iteration requires little time. Therefore, when a sketching strategy is used, the speed up for each boosting iteration may be insignificant (especially because of ineffective utilization of GPU). At the same time, the number of iterations needed to convergence may be greater, which may result in an increase of the overall training time. Exactly this happened on some datasets from [Zhang and Jung, 2021]. We will clarify it in the next revision.\n\n**Why GBDT-MO Sparse, Full and CatBoost (Table 4) are on CPU?**\n\nThank you for bringing this up! You are right, the reason for this is that the GBDT-MO implementation from Zhang and Jung [2021] is available only for the CPU. And to make its performance comparable to another algorithm, we decided to run CatBoost also on CPU. This information is now added; see page 9.\n\n**A figure similar to Figure 1 but with the authors proposed algorithms would be nice to include (I haven't checked if it's part of the supplementary material)**\n\nThank you for the great suggestion! We did not have such a figure with SketchBoost. It is now given in Section 2.7 in the Supplementary Material (for your convenience, it is available also at https://sites.google.com/view/sketchboost/). We will include it in Conclusion if the paper is accepted and we have an additional content page.\n",
" The submission presents three methods to speed up split finding in multi-output gradient boosted decision tree ensembles. The best-performing method applies Gaussian random projections to compress the gradient matrix before splits are found based on the compressed matrix. Empirical results on several multi-class, multi-label, and multi-target regression problems show that this yields much faster training times while maintaining a similar level of accuracy.\n What is proposed makes sense, and the paper is well written. The empirical results are presented \"for the best k\", which determines the amount of compression, and this seems to imply parameter tuning on the test set, which would be problematic. However, it seems doubtful that the overall findings would be affected significantly by this.\n\nWhat is of greater concern is that the proposed method should probably be compared to other, more generic approaches to multi-target prediction that are based on compressing the vector of targets, e.g.,\n\nTsoumakas, G., Spyromitros-Xioufis, E., Vrekou, A., & Vlahavas, I. (2014, September). Multi-target regression via random linear target combinations. In Joint european conference on machine learning and knowledge discovery in databases (pp. 225-240). Springer, Berlin, Heidelberg.\n\nHsu, Daniel J., Sham M. Kakade, John Langford, and Tong Zhang. \"Multi-label prediction via compressed sensing.\" Advances in neural information processing systems 22 (2009).\n\nJoly, A., Geurts, P., & Wehenkel, L. (2014, September). Random forests with random projections of the output space for high dimensional multi-label classification. In Joint European conference on machine learning and knowledge discovery in databases (pp. 607-622). Springer, Berlin, Heidelberg.\n\nKapoor, A., Viswanathan, R., & Jain, P. (2012). Multilabel classification using bayesian compressed sensing. Advances in neural information processing systems, 25.\n\nCissé, M., Artieres, T., & Gallinari, P. (2012, September). Learning compact class codes for fast inference in large multi class classification. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 506-520). Springer, Berlin, Heidelberg.\n\nWicker, J., Tyukin, A., & Kramer, S. (2016, April). A nonlinear label compression and transformation method for multi-label classification using autoencoders. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (pp. 328-340). Springer, Cham.\n\nThe choice of x axis in Figure 1, particularly given the small size of the font used to label the axis, may mislead the reader into thinking that dependence on the number of targets is not linear. Also, Figure 1 illustrates performance for multi-class classification. It seems multi-label classification would be a more obvious example. Finally, it is necessary to (somewhere) present details of the synthetic data and the exact configuration of the learning algorithms.\n\nTypos:\n\nThere is a broken reference on Line 110.\n\n\"on the most multiclass datasets\"\n\n\"it achieve\"\n\n\"it is order of magnitude faster\" N/A There does not seem to be an explicit discussion of limitations.",
" This paper aims to speed up the search process of the tree split in the training of Gradient Boosted Decision Tree (GBDT), especially for tasks with highly dimensional output. The authors propose three methods, Top-Outputs, Random Sampling and Random Projections, to reduce the computational complexity of the split scoring function. Extensive experiments demonstrate the effectiveness of SketchBoost in terms of performance and efficiency. Pros:\n\n[P1]: The paper is well written and organized in terms of clarity.\n\n[P2]: Extensive tests are conducted to demonstrate the efficacy of the proposed strategy. Test results demonstrate that SketchBoost can obtain results equivalent to or even superior to existing SOTA approaches. SketchBoost is an order of magnitude faster on multidimensional datasets such as Dionis (355 classes) and Delicious (983 labels).\n\nCons:\n\n[C1]: In order to deal with multi-dimensional tasks, some baseline could be considered. ( e.g LightGBM according to [1], a Deep learning-based solution[2][3])\n\n[C2]: A related discussion connecting to deep learning method would be better.\n\n[C3]: Some typo like line 110/257.\n\n\n[1] Deep Neural Networks and Tabular Data: A Survey, https://arxiv.org/abs/2110.01889\n\n[2]VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain https://proceedings.neurips.cc/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-Paper.pdf\n\n[3] TabNet, https://arxiv.org/abs/1908.07442\n [Q1]: Random Projection appears to routinely outperform the other two methods; under what circumstances would you advise using the other two methods? It seems that the Random Projections is an advanced version of the other two proposed methods and the paper can be improved if the random matrix used in projection can be adaptive to the task datasets.\n\n[Q2]: The sensitivity analysis of sketch dimension k indicates that reducing dimensions can result in distinct performance patterns. A interesting discovery is that Random Projections perform optimally when k = 1. When use SketchBoost for datasets with varying output dimensions, will there be a recommendation for selecting k?\n N/A",
" The authors present three approaches to improve Gradient Boosted Decision Trees (GBDT) for multi output problems. The main goal is to speed up the training time of GBDT especially for those datasets with high number of classes. The authors' approaches focuses on a more efficient way to compute the scoring function, which maintains similar levels of performance (predictive performance). \nThe authors approaches are not a substitute for XGBoost, CatBoost or any SOTA GBDT algorithm, but rather can be integrated with any of these solutions. \nFinally, the authors present extensive experiments to validate their solution, showing significant decrease in training time without compromising predictive performance. \n I believe this paper is well-written, tackles a relevant problem, considering how often these kind of algorithms are used in industrial settings, has a solid theoretical background, which is then evaluated with extensive experiments on different datasets. \n\nIn particular, I believe the strengths of this paper are: \n- Problem that they tackle is relevant\n- The solution is built to be used on top of existing SOTA algorithms\n- SOTA algorithms are also compared to their solution, showing promising results\n- Nice description of the complexity analysis between their solution and having the standard approach (Section 3.4). \n- Nice comparison by running the baselines also on the GPU\n- Good and promising results overall. \n- Code available \n\nA few things to improve that are worth mentioning:\n- Conclusions section: I am missing a conclusions section where you summarize the goal of the paper and the findings. It reads strange and like there is something missing when the paper just ends in the last comparison of the algorithms. \n- Table 1: The error metrics reported are after running all algorithms in the GPU right? (same as for Table 2, that its mentioned explicitly). \n- Would be good to clarify why in Table 4 SketchBoost Full is taking less time to run than Random Sampling and Random Projection? I was expecting that the approaches presented by the authors would take less training time, as occurred during the comparison with CatBoost and XGBoost (Table 2). Is there a reason for this? \n- For the second experiment, the comparison against GBDT-MO sparse and full are conducted on the CPU. Same as for CatBoost. Is the reason for this because the implementations that are available for such algorithms are only for the CPU? It would be nice to clarify. Same as why in that case CatBoost is run on the CPU (I guess to make it comparable to the other two). \n- A figure similar to Figure 1 but with the authors proposed algorithms would be nice to include (I haven't checked if it's part of the supplementary material) The questions are part of the previous section (Strengths and Weaknesses). \nBut to summarize them\n- Table 1, error metrics reported of running everything on GPU?\n- Why in Table 4 SketchBoost Full is taking less training time?\n- Why GBDT-MO Sparse, Full and CatBoost (Table 4) are on CPU? I don't have anything to add here. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"EvjXWu_19-y",
"wR1OAzYnTua",
"tDuwNcvcELP",
"Vs0WlmZybuj0",
"nC2IQ6xMspU",
"nips_2022_WSxarC8t-T",
"bQbR5vN_HLK",
"gCLree-cVM",
"6CphUQHtrmQ",
"nips_2022_WSxarC8t-T",
"nips_2022_WSxarC8t-T",
"nips_2022_WSxarC8t-T"
] |
nips_2022_wOI0AUAq9BR | SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks | In the past few years, graph neural networks (GNNs) have become the de facto model of choice for graph classification. While, from the theoretical viewpoint, most GNNs can operate on graphs of any size, it is empirically observed that their classification performance degrades when they are applied on graphs with sizes that differ from those in the training data. Previous works have tried to tackle this issue in graph classification by providing the model with inductive biases derived from assumptions on the generative process of the graphs, or by requiring access to graphs from the test domain. The first strategy is tied to the quality of the assumptions made for the generative process, and requires the use of specific models designed after the explicit definition of the generative process of the data, leaving open the question of how to improve the performance of generic GNN models in general settings. On the other hand, the second strategy can be applied to any GNN, but requires access to information that is not always easy to obtain. In this work we consider the scenario in which we only have access to the training data, and we propose a regularization strategy that can be applied to any GNN to improve its generalization capabilities from smaller to larger graphs without requiring access to the test data. Our regularization is based on the idea of simulating a shift in the size of the training graphs using coarsening techniques, and enforcing the model to be robust to such a shift. Experimental results on standard datasets show that popular GNN models, trained on the 50% smallest graphs in the dataset and tested on the 10% largest graphs, obtain performance improvements of up to 30% when trained with our regularization strategy. | Accept | This work proposes a regularization approach (based on graph coarsening and alignment) to allowing graph neural networks to generalize across different graph sizes. The approach proposed here is simple, yet shown to be effective. While the reviewers had some concerns regarding this paper and the results in it, these were alleviated to a sufficient extent in rebuttal so that currently one reviewer outright supports acceptance, and the others lean towards acceptance. I agree with the opinion supporting acceptance of the paper, especially given the remark about the value (and rarity) of simple, clear, and effective solutions to important problems, which I agree is the case here. Therefore, I recommend accepting the paper, and I would like to encourage the authors to take into account the reviewers' comments (and the additional points included in the responses to them) when preparing the camera ready version. | train | [
"z5W_RxyKJfa",
"RM8LhCQTT1u",
"DmhjjI9YsOK",
"DiKcuaqQYA4",
"386X_MbhS9d",
"BwRiFZfXoTN",
"AGnyxYgVy6c",
"LlWK3k-Yon5",
"uGeSC5yxh3W",
"9Bxcs1YmGip",
"T6LZ8EJaoxP",
"bHTn3dS_qV-"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer for taking the time to answer our rebuttal. We will try to address the standing concerns below.\n\n- Q1. The unattributed datasets in [6] were designed to test the theoretical assumptions behind the model proposed in [6], and in fact there is a big discrepancy (in the ranking of the model score) with the results they then have on the \"real-world\" datasets. We then focused on the real world datasets to test the performance of our method in realistic benchmark datasets. The Embedding Analysis of Section 4 in the main paper was designed to provide insights on the intuition behind our method by studying how our method affects the node embeddings generated by the models. The results show in fact that models trained with our strategy are more robust to size-shifts with respect to models trained without our strategy. CKA is a popular and powerful tool for analysing the representations of neural networks (see for example [Nguyen et al., ICLR 2021]).\n\n- Q2. Applying the L2 norm on the graph representations after readout layers would be a much weaker training signal than using a distribution-wise discrepancy metric such as CMD, which can be seen as performing multiple L2 comparison between different \"readout representations\" (one per every considered moment). In fact what is suggested by the reviewer could be seen as a special case of CMD in which we remove the second term (i.e. the summation over $k$). We can however incorporate some results in the Appendix of the camera ready version, and we thank the reviewer for the suggestion.",
" Thank you for your reply. I appreciate the effort to run additional experiments in a limited time.\n\nOverall, my concerns have been partially addressed. More details: \n\nQ1. [Partially addressed] While [6] considers the same four datasets, it also includes empirical analysis on unattributed graphs. Since the proposed method is not theoretically grounded, I believe it would require a stronger empirical assessment. The additional experiment on Deezer only partially addresses this issue.\n\nQ2. [Not addressed] I still think the authors could have considered, e.g., Wasserstein distance between the sets of node embeddings (original and coarsened graphs) or even applied l2-norm after readout layers.\n\nQ3. [Addressed] This was more a general comment than an important issue. I was not expecting authors to consider other setups given the limited time.\n\nQ4. [Addressed] My point was that reporting accuracy results would not hurt. But I have no concerns regarding MCC.\n\nQ5. [Addressed] It would be helpful to report the coarsening ratios associated with these experiments in the Appendix.",
" We would like to add that we have performed an additional experiment as suggested by the reviewer.\n\nIn fact, we have tested our method on a dataset of a different domain. In more detail we have tested it on a graph classification dataset of social networks: the Deezer dataset introduced in \"Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs\", Rozemberczki et al., CIKM 2020.\n\nThe results are shown below (results for a model trained with and without our regularization). We followed the same evaluation procedure as for Table 2 and Table 3 in the main paper: we train on the 50% smallest graphs and test on the 10% largest, and we employ $\\lambda=0.1$ and $C=(0.8,0.9)$ without any sort of hyperparameter tuning. We notice our method proves highly effective with improvements of up to 20%.\n\n| Dataset | Deezer | Deezer |\n|---------|--------------|--------------|\n| Reg. | No | Yes |\n| PNA | 0.59 +- 0.06 | 0.64 +- 0.07 |\n| GCN | 0.49 +- 0.10 | 0.59 +- 0.06 |\n| GIN | 0.55 +- 0.08 | 0.61 +- 0.07 |\n\nWe will include these results in the final version of the paper.\n",
" We would like to add that we have performed an additional experiment as suggested by the reviewer.\n\nIn fact, we have tested our method on a dataset of a different domain. In more detail we have tested it on a graph classification dataset of social networks: the Deezer dataset introduced in \"Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs\", Rozemberczki et al., CIKM 2020.\n\nThe results are shown below (results for a model trained with and without our regularization). We followed the same evaluation procedure as for Table 2 and Table 3 in the main paper: we train on the 50% smallest graphs and test on the 10% largest, and we employ $\\lambda=0.1$ and $C=(0.8,0.9)$ without any sort of hyperparameter tuning. We notice our method proves highly effective with improvements of up to 20%.\n\n| Dataset | Deezer | Deezer |\n|---------|--------------|--------------|\n| Reg. | No | Yes |\n| PNA | 0.59 +- 0.06 | 0.64 +- 0.07 |\n| GCN | 0.49 +- 0.10 | 0.59 +- 0.06 |\n| GIN | 0.55 +- 0.08 | 0.61 +- 0.07 |\n\nWe will include these results in the final version of the paper.\n",
" We thank all the reviewers for their time and effort. Together with replying to all the questions, we have performed an additional experiment as suggested by reviewers MVo5 and k3VB.\n\nIn fact, we have tested our method on a dataset of a different domain. In more detail we have tested it on a graph classification dataset of social networks: the Deezer dataset introduced in \"Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs\", Rozemberczki et al., CIKM 2020.\n\nThe results are shown below (results for a model trained with and without our regularization). We followed the same evaluation procedure as for Table 2 and Table 3 in the main paper: we train on the 50% smallest graphs and test on the 10% largest, and we employ $\\lambda=0.1$ and $C=(0.8,0.9)$ without any sort of hyperparameter tuning. We notice our method proves highly effective with improvements of up to 20%.\n\n| Dataset | Deezer | Deezer |\n|---------|--------------|--------------|\n| Reg. | No | Yes |\n| PNA | 0.59 +- 0.06 | 0.64 +- 0.07 |\n| GCN | 0.49 +- 0.10 | 0.59 +- 0.06 |\n| GIN | 0.55 +- 0.08 | 0.61 +- 0.07 |\n\nWe will include these results in the final version of the paper.\n\nWe also kindly remind the reviewers that we remain available for further clarifications.\n",
" Together with the previous response, we add some additional comments on some aspects raised be the reviewer.\n\n#### Other comments\n- We thank the reviewer for highlighting that it is best to explicitly write that the coarsening ratio reflects the percentage of retained nodes.\n- CKA is designed exactly for the purpose of comparing representations of neural networks, and is designed to work with representations of possibly different sizes. CKA is commonly used to study if different models are encoding the same information, or if different layers of the same model are containing the same information (see [32]).\n- Regarding the computational complexity: generating the coarsened datasets is a *pre-processing* step that only needs to be done *once*, and here the complexity will depend on the choice of the coarsening algorithm. At training time, for a given batch, there is an additional forward pass for each considered coarsening ratio and then there is the computation of the loss. Both these operations require constant additional time, and in practice we notice an overhead of up to 50%, as written in the paper. At inference time there is no additional computation to be performed, and so no overhead of any kind.",
" We thank the reviewer for the thoughtful review and for highlighting the quality of the paper. Before answering the raised questions, we would like to clarify some aspects of our evaluation procedure. \n\n### Clarification of evaluation procedure\n1 - As [6] is (to the best of our knowledge) the only paper tackling the size-generalization problem of GNNs in a graph classification setting with access only to small graphs during training, we followed exactly their evaluation procedure to properly compare against it. We used the same datasets, train/val/test splits, and hyperparameters (these were in fact obtained by the authors of [6] through an hyper-parameter tuning procedure to find the best configuration for the given datasets).\n\n2 - To identify the values of $\\lambda$ and $C$ to use for our method, we tried different values ($\\lambda= \\{1.0, 0.1, 0.01, 0.001\\}$ and $C=\\{ (0.5), (0.8), (0.9), (0.5,0.8), (0.5,0.9), (0.8,0.9)\\}$) on the *validation* set using a GIN model on the PROTEINS dataset. We found that $\\lambda=0.1$ and $C=(0.8,0.9)$ performed best on the *validation* set, and so we tried those on the validation set of other datasets and with other models. We noticed that results on the validation sets were good (as they were leading to better results on the validation sets with respect to a model trained without our regularization), and so we decided to keep the same values of $\\lambda$ and $C$ for all datasets and models to show that our method can work without extensive (and expensive) hyperparameter tuning (and we believe this is further proof of the effectiveness of our method).\n\n3 - We first obtained the results for Table 1, Figure 2, Table 2, and Table 3, in the main paper. Then, only *after* having those results, we have performed the ablation study. Please notice that, in the results shown in the ablation study, there are several settings which lead to higher results than what is shown in Tables 2 and 3. Using those results in Tables 2 and 3 would have made our method look even stronger, but clearly would have invalidated our evaluation procedure as we would have used the test set to find those configurations, which instead is not the case.\n\nTo conclude, we remark that the results shown in the ablation study are __not__ coming from an hyperparameter tuning procedure, but were performed __after__ all the tuning (done on the validation set) and evaluation results were obtained. The ablation is used to understand the impact of the components of our method only *after* having evaluated it, as is the standard procedure for ablation studies. We have now specified this in the appendix, and we thank the reviewer for highlighting this aspect. \n\n### Answer to questions\n- Q1. We thank the reviewer for highlighting this aspect. For space limitations we weren't able to include precise details for the procedure. We will however include these in the Appendix. The values of $\\lambda$ used here is $0.1$, and the value of $C$ is $(0.8, 0.9)$ as for the results in Tables 2 and 3.\n- Q2. As the same coarsening ratio is applied to every graph in the dataset, all graphs are coarsened by the same amount. This means that (except in pathological cases with extremely small coarsening ratios), the \"proportions\" in the dataset will remain the same, in the sense that, for example, the ratio between the number of nodes in the largest graph and the number of nodes in the smallest graph should stay almost the same. 
With low coarsening ratios, there is however the problem that the coarsening can be too aggressive and the graphs may lose all their distinguishing topological information. This is one of the reasons why very low coarsening ratios lead to poor performance (as shown in our ablation study).\n- Q3. For our evaluation we followed the datasets proposed by the current state-of-the-art ([6]), and we analyzed the impact of different coarsening ratios in our ablation study. In this work we wanted to focus on introducing a method for improving size-generalization in GNNs, and we agree that trying our method on different kinds of networks and applications (and with application-specific design choices) is a great direction for future work. ",
" We thank the reviewer for the insightful comments, and for highlighting the importance of the problem and of our results.\nWe answer below to the questions raised by the reviewer, and we remain available for further clarifications.\n\n### Answer to questions\n- Q1. The choice of the datasets is simply coming from relevant prior work which we have to compare against. As [6] is (to the best of our knowledge) the only paper tackling the size-generalization problem of GNNs in a graph classification setting with access only to small graphs during training, we followed exactly their evaluation procedure to properly compare against it.\n- Q2. Our method is trying to compare *distributions* of node embeddings, so we needed distribution divergences, and not element-wise divergences. Considering the literature on the use of distribution-wise divergences in machine learning, the choice was between MMD and CMD. We chose the latter as MMD can be seen as a simpler version of CMD (in fact the two are exactly the same if one removes the higher order moments component from CMD), and previous works ([41,58,60]) highlighted the superiority of CMD (for example empirical results show that CMD is less susceptible to the weight with which the regularization is added to the loss).\n- Q3. We agree that there are many areas of interest in which our work can be applied, and we believe this is a sign of its potential impact and of the importance of the tackled problem. In this work we focused on proposing a new method for the problem of size-generalization that can be applied on any GNN. Given the space limitation of nine pages, we leave the specialised applications of our technique for future work.\n- Q4. As the datasets are strongly unbalanced (as can be seen from the dataset statistics in the appendix), the accuracy values provide no significant signal. Just to give an example, if you have a model that assigns *all graphs* in the test set to the majority class, it would achieve *at least* ~80% accuracy on *all* datasets, even though it would classify incorrectly *all* graphs belonging to the minority class. Using the MCC as a metric allows us to instead clearly study the quality of the models in this unbalanced setting. This is also confirmed in previous work ([6]) which only reports MCC values for the same reason.\n- Q5. We thank the reviewer for the remark, we have added the training times in the appendix to confirm the 50% overhead.",
" We thank the reviewer for the time and comments. We believe that a novel combination of existing techniques to solve an important practical issue is a valuable contribution.\nWe answer below to the questions raised by the reviewer, and we remain available for further clarifications.\n\n## Answer to questions\n- Q1. Yes we do try different coarsening algorithms. The results are shown in the paper in Section 5.2 where there is a subsection titled \"Changing Coarsening Method.\", in particular in Figure 3 it's possible to notice how the performance change using 4 different coarsening methods. Two of these are specialised graph coarsening techniques, while the other two are baseline graph clustering methods. We notice that our regularization strategy is robust to the choice of coarsening algorithm.\n",
" The paper focuses on how to generalize GNNs to graphs of different sizes in graph classification. Previous works either add an ad-hoc strategy or require access to test graphs. In this work, the authors propose a regularization method based on graph coarsening techniques to improve the size-generalization of GNNs for graph classification. Empirical results show that the performance will be improved up to 30% with the proposed regularization method. Strengths:\n* The paper proposes a regularization strategy that improves the size-generalization ability of GNNs for graph classification. \n* Empirical results show that GNN models with the proposed regularization strategy achieve comparable or better size-generalization performance than baselines. \n\nWeaknesses:\n* The method is straightforward and the novelty is limited. The paper combines existing coarsening techniques and an existing metric (i.e. CMD) for the regularization. The paper uses the SGC coarsening algorithm in the experiments. Do the authors try other coarsening methods and evaluate how the coarsening algorithm affects the regularization performance? The authors discuss the limitation in Sec 3.1 about the assumption that there are some size-invariant properties that determine the label. ",
" The paper proposes a new regularization strategy for improving size generalization in GNNs. The high-level idea consists of applying coarsening algorithms to original training data and then enforcing node embeddings from the original graph and coarsened ones to be closer. The paper considers four benchmarks for graph classification to assess the effectiveness of the proposal.\n\n Overall, the paper reads well and the proposed method is rather simple and general. Also, the problem of interest seems relevant. \n\nHowever, the paper lacks stronger motivation behind some design choices, such as graph coarsening algorithms and discrepancy metrics. For instance, CMD is chosen because \"it has proven to be successful and stable as a regularization term for non-linear models\". Also, the experimental setup is weak as it only includes four molecular datasets, and follows an existing setup (Bevilacqua et. al., 2021). Moreover, the analysis section provides no principled assessment of the method, mostly showing that the proposed regularization somehow affects node representations. \n\nStrengths\n- Simplicity: the idea is somehow intuitive even though it is not theoretically grounded.\n- Promising results\n\nWeaknesses \n- Limited evaluation setup\n- Poor understanding regarding the applicability of the proposal (only general comments are made in Section 3.1 --- Limitations) \n - As stated in Section 3.1, the proposal assumes labels are not too sensitive to size. Is there any connection between this assumption and the chosen benchmarks? Why is the proposed method only validated using four molecular datasets?\n- It would be helpful to have an ablation study considering different discrepancy metrics. For instance, would simple distance-based metrics obtain good results? \n- The proposal seems to have the potential for improving pre-training. When evaluating existing strategies, people often split molecules according to their scaffold [1]. Showing gains to that task would increase the impact of the proposal.\n- Accuracy results could go in the appendix.\n- Showing training times would support the claim that the proposal incurs a 50% overhead.\n\n[1] https://arxiv.org/pdf/1905.12265.pdf The paper briefly describes the limitations of the proposal. ",
" This paper presents a simple and intuitive regularization technique to make graph networks generalize better to graphs with different sizes, by employing coarsened representations of the same graph and encouraging alignment of node embeddings between such graphs.\n\n**EDIT after rebuttal:**\n\nI thank the authors for clarifying important aspects of the evaluation procedure. It appears my assumptions were incorrect. I therefore recommend that the paper is accepted, as it provides a simple, clear, novel and effective mechanism to tackle an interesting problem for the graph machine learning community, something which is rare these days. It is rare to find and review a paper which clearly presents a simple, intuitive and sensible idea to this kind of conferences. Overall, it was a pleasure to read and very clear under all aspects. The paper seems technically sound, although it is not clear how the specific choice of CKA influences the results, as it compares graph representations of original and coarsened graphs which have different dimensions (if my understanding is correct). The underlying motivations of the paper is clear and the proposed solution seems simple enough to be used in everyday research/industrial contexts. Perhaps it is not very clear what is the computational complexity of the regularization loss, and the authors may want to consider Section 4 after the experimental setup is introduced (see my first question below).\n\nDespite the paper being of high quality in many respects, I have serious concerns about the experimental protocol which led to the creation of Table 3, the main empirical evaluation of the paper. From lines 219-234, it emerges that most hyper-parameters where chosen from [6], which is reasonable as long as the empirical setup stays the very same. \n\nWhat appears definitely less reasonable and fundamentally wrong seems to be:\n- using the same settings of $\\lambda$ and $C$ for all datasets and models after looking at a single dataset (see lines 224-225) and Table 3 in the appendix B\n- lack of information, either in the appendix or in the main paper, about the values of $\\lambda$ cross-validated against the $\\textbf{validation}$ set\n- Evidence, by looking at Table 4 (caption, main paper) and Tables 2,3 (appendix B) that the hyper-parameters have been set after looking at the MCC performances on the $\\textbf{test}$ set. This would mean that the authors cherry-picked the results that maximize the performances on the test, rather than choosing them on the validation set. The authors are strongly encouraged to honestly and openly comment on this; it is possible that my evaluation is incorrect, but the combination of lines 275-277 with 224-225, together with the caption of Table 4, seems to point in this direction.\n\nMinor comments:\n\n- while one usually assumes that the coarsening ratio reflects the percentage of retained nodes, it may be helpful to explicitly write this in the paper.\n- It is known that a structure-agnostic baseline can produce state of the art results on PROTEINS, whereas it struggles on NCI1 for instance. It may have been a good idea to make an hyper-parameter study on the latter rather than the former dataset (but on the validation set) - Section 4: we have no details about the experimental procedure used here. Which lambda values have been used here to produce Table 1? 
Maybe it would be better to include this analysis in Section 5, so as to improve the overall organization of the paper.\n- How does the distribution over the sizes of the graphs changes after pre-processing? This analysis could be interesting in datasets where graphs' sizes greatly vary, e.g. social datasets from TUDataset benchmarks page.\n- From Section 3.1., why not trying a single experiment with much higher coarsening ratios? This could be a nice addition to the paper. Social datasets could be a good option here as well. Limitations of the paper mainly lie in the experimental results, which may be invalid if my argument is correct."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"RM8LhCQTT1u",
"LlWK3k-Yon5",
"BwRiFZfXoTN",
"LlWK3k-Yon5",
"nips_2022_wOI0AUAq9BR",
"AGnyxYgVy6c",
"bHTn3dS_qV-",
"T6LZ8EJaoxP",
"9Bxcs1YmGip",
"nips_2022_wOI0AUAq9BR",
"nips_2022_wOI0AUAq9BR",
"nips_2022_wOI0AUAq9BR"
] |
nips_2022_GkDbQb6qu_r | CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers | Development of transformer-based text-to-image models is impeded by its slow generation and complexity, for high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation.
We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, a cross-modal general language model (CogLM), and fine-tune it for fast super-resolution.
The new text-to-image system, CogView2, shows very competitive generation compared to concurrent state-of-the-art DALL-E-2, and naturally supports interactive text-guided editing on images. | Accept | This paper describes a less auto-regressive approach for text-to-image generation. Reviewers were somewhat split, with one reject and one strong accept. Overall, the results the authors get are perhaps less compelling given the progress the field has made over the past few months since submission time, but I think the focus on less autoregression for this problem is important and potentially useful for other approaches to this problem, and also the authors had far fewer computational resources and smaller datasets than some of the other recent work, which makes me feel there is a chance the less-auto-regressive generation they employ will be useful for other text-to-image generation projects. I'm slightly less certain about this one because the results don't seem as compelling given works published after the submission deadline, and also the lack of experiments to investigate that this approach can be used with other text-2-image generation works. Still, it seems like an important direction that could use more papers, so I think this should be accepted. The bilingual generation is a bonus. | train | [
"ZbNFfm1nN5",
"V4Scasj2jIj",
"Q0zUciAAAh",
"Iv_AWJPjeo8",
"au7ym-X3icx",
"A5uUHVKPAcJ",
"O6pz7RUfDag",
"IS-KPcyimMM",
"SR9o2rOH40",
"ag5GOZXc_5P",
"A_tdGDBj_0u"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification and additional information!",
" I appreciate authors for providing the extra visualizations. Most of my questions are resolved, but still I am not very convinced but the masking strategy. I raise the score to weak accept based on overall contributions. ",
" \nThank you very much for your review. We will explain your concerns point by point.\n\n> The main difference is the new masking strategy in CogLM where tokens inside the mask region are trained to predict the next token based on past masked-tokens and non-masked context tokens. \n\nPlease also consider the LoPAR upsampling method, which generates samples much faster than the pure auto-regressive way for high-resolution images. Our local attention kernel, attention upweighting, clustering sampling can also benefit the other auto-regressive generative models.\n\n> It will be great if authors can provide more visualizations, especially those with higher resolution. I can not judge the quality of generated images from just a few hand-picked ones.\n\nOf course, we can. There are some unfiltered batches of images in different categories. \n\n- people: https://imgur.com/uM3LWLM\n- scenes: https://imgur.com/cyznGD4\n- animals: https://imgur.com/TIZT97C\n- objects: https://imgur.com/bH9FJ9G\n- scientific: https://imgur.com/pQSxMcd\n\n> What is FID-k in Table 1? I assume it means radius of the Gaussian Filter but couldn’t find any explanation in the text. If my guess is correct then shouldn’t FID–0 be the most important number? Does this mean that CogView2 is worse than multiple baseline methods in terms of image quality?\n\nThe FID-k is the metric proposed in the DALL-E paper[1]. It blurs the details and compare the main contents. FID-0 cares a lot about details. For example, DALL-E has worse FID-0 (and better FID-k) than DM-GAN and DF-GAN, but it is still seen as a great advancement and get higher human evaluation scores. So FID–0 is not always the most important number. A recent work (https://www.cs.cmu.edu/~clean-fid/) also reveals that even the jpeg quality has a great influence on FID-0. \nIn our experiments, we care more about human evaluation performance, where CogView2 outperforms CogView, LAFITE et al. by a large margin. \n\n[1] Ramesh, Aditya, et al. \"Zero-shot text-to-image generation.\" 2021.\n\n> As stated before, I have concerns about the masking strategy. I do not see a motivation for joint learning of autoregressive generation and bidirectional mask prediction. Authors are encouraged to provide more ablation studies to support the design of CogLM and explain why it is better than the training in CogView.\n\nAs discussed in Line 31-39, we want to overcome the defect of unidirectionality of usual auto-regressive models. CogLM naturally supports text-guided infilling tasks, as showed in Appendix B, and make the finetuning for itersr very easy, because the itersr task is very similar to the mask prediction in the CogLM pretraining.\n\nTo train a versatile transformer in a simple way itself is also a popular topic. For example, the most recent work (28, July) of OpenAI proposed an extremely similar model as CogLM, called FIM[1] to fulfill infilling jobs, and verify its performance for left-to-right generation.\n\n[1] Efficient Training of Language Models to Fill in the Middle. https://arxiv.org/pdf/2207.14255.pdf\n\n> For human quality evaluation, I wonder why authors do not include DALL-E as part of the test? That should be a key baseline.\n\nBecause DALL-E is not open-source. Many text-to-image models are not open-source, so that we cannot include them in the evaluation. This also highlights the value of open-sourcing of CogView2.\n\nIf our answer above ease your concerns, could you increase your rating a bit? Please tell us if you have further concerns.\n\n",
" \nThank you very much for your careful and insightful review. We will explain your questions and concerns point by point.\n> “Better” text-to-image generation: As the authors acknowledged in the paper, the automatic metrics on MS-COCO may not be the best way to evaluate these models, hence the claim of CogView2 being “better” (in the title) or competitive (in the abstract) compared to other models such as DALLE2 can use some scoping/hedges. Outside of this paper, there have been some informal qualitative comparisons between several models, which seem to give the impression that CogView2 is not strictly better than other models.\nI think some scoping about text is necessary. \n\nThank you for your advice. The ``better'' mainly compares CogView2 with CogView, and we will modify the abstract to lower the description in the revised version. We will also add some scoping in the description of the codes.\n\n> It would be interesting to see more in-depth analysis on this bilingual aspect of the model.\n\nThank you for your suggestions. It would be interesting to look into this. Currently, we have some observation on this topic. \n(1) It is possible some words only appear in Chinese or English data. \n(2) we find that the multi-mapping between languages could attribute for a large part of the performance gap. For example, “一个提手提箱的人的照片” can be translated into English as \"a photo of a person with a suitcase\" or \"a photo of human and suitcase\" (not very native). We find that the former translation performs much better because \"human\" is not usually used in these cases, even though both \"human\" and \"person\" are \"人\" in Chinese. We anticipate that using a pretrained NLP model for deep text understanding, as in the recent work Imagen, can largely solve this problem.\n\n> What were the main challenges/blockers for directly comparing different models’ inference time in an end-to-end fashion?\n\nTo compare with the specific previous text2image models is quite hard, because the speed is relevant to the model size, deployment efforts (e.g., TensorRT) and the machines. To compare LoPAR with AR-based models under the same conditions, we add a table of the time and FLOPs for a 4,096 sequence on an A100-40GB GPU with different AR-based methods. The model configurations are the same as the pretrained CogLM 6B. All AR-based methods, e.g. DALL-E, CogView, Make-A-Scene, et al. should share the same trends with the table. \n\n| | FLOPs | time | Memory (inference) |\n| ---- | ---- | - | - |\n|Forward (also teacher forcing training) | $1.17*10^{14}$| 858 ms | 5,041MB\n|Autoregressive generation (No cache)| $4.81*10^{17}(4095\\times)$ | about 1h | 5,041MB\n|**Autoregressive generation (cached)** | $1.17*10^{14}(1\\times)$ | 225.9s | 4,865MB\n|LoPAR| $7.02*10^{14}(6\\times)$ | 4.89s | 5,041MB\n|LoPAR+local_attention | $5.82*10^{14}$ | 3.41s | 352MB\n\nAs discussed in the second paragraph (Line 13-17), our motivation is to increase the degree of parallelism for acceleration, even with more FLOPs. Autoregressive generation with cached hidden states have the same FLOPs with a teacher-forcing forward step, but is much slower (858ms vs 225.9s). We draw a diagram (https://i.imgur.com/T0Y9io2.png) for better understanding this. For LoPAR, it is exactly N (N=6 in our setting) times and FLOPs of forward steps. \n\nThe results show that we reduce the time for generation from 225.9s to 3.41s (not including the first hierarchy of pure AR) and the memory from 5.4GB to 352MB. 
We will add the above part to a revised version.\n\n> Discussion on failure modes: Even from the cherry picked examples in Figure 1, there are multiple failure modes observed (e.g. different numbers or lengths of fingers). What are the main types of failure modes the authors or human annotators observed? Any difference when it’s in English vs. Chinese?\n\nIn our observation, the main failure modes include:\n(1) Artifacts on repeated patterns. We find these problems usually happen on grass in the DSR step. Since this seldom happens for other objects, an assumption is that it is data-related. There is no enough high-resolution grass images in our datasets.\n(2) Details on eyes and fingers. There are some inconsistency on eyes, and sometimes the number of fingers are wrong. We find similar artifacts in DALL-E2 examples but no in Imagen. We hypothesize the reason is that the iterative steps are not enough.\n(3) Position bias. The objects or humans are more likely to collapse than the middle ones.\n\nWe didn't find obvious differences about the failure modes for English or Chinese texts.\n\n> Nitpicking\n\nThank you very much for your carefully reviewing and valuable suggestions. We will update them in a revised version.\n",
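A tiny counting sketch may help make the "same FLOPs, very different wall-clock time" argument above concrete. It counts only the (query, key) pairs evaluated under a causal mask and ignores constants, the feed-forward layers (whose per-token cost is identical in both schemes), and kernel-launch overheads, so it illustrates the argument rather than benchmarking the released models.

```python
seq_len = 4096

# (query, key) pairs actually evaluated under the causal mask, up to a constant per layer
teacher_forcing = sum(t for t in range(1, seq_len + 1))  # one parallel pass over all positions
cached_ar       = sum(t for t in range(1, seq_len + 1))  # 4096 sequential steps; step t sees t keys
lopar_passes    = 6                                      # LoPAR: 6 full, well-parallelised passes

print(teacher_forcing == cached_ar)  # True: identical arithmetic, yet cached AR pays for
# thousands of tiny sequential kernel launches while LoPAR pays for only `lopar_passes`
# large parallel ones -- which is where the reported speed-up comes from.
```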
" \n\nThank you very much for your valuable review. We will answer your questions point by point.\n> Have you tried other local parallel generation manners? e.g. within a local patch, left to right or top to bottom generation in a non-autoregressive fashion?\n\nYes, these are methods we first tried but found that if adjacent tokens are generated at the same step, they were sometimes not coherent, because the two position are very relevant, but don't know each other during the step of generation.\n\nFor example, we upsample a batch of unfiltered generated images of \"一群穿着格子衬衫的程序员 (a group of programmers wearing plaid T-shirts)\" using LoPAR and a top-to-bottom LoPAR (https://imgur.com/gallery/KYsMesa). We can see that the sleeves in subfigures 2 and 4 have more consistent patterns in diagonal style LoPAR.\n\n> The attention mask shown in Figure2 is somehow difficult to understand. Do you mean the token in green box is masked and going to be generated? This slightly confuses me.\n\nSorry for this confusion. We will revise this part for a better introduction. \nThe green part is actually not masked physically, but applied a casual mask (as in the right figure) so that they cannot be seen by tokens outside the green parts. We use the term \"mask region\" for the consistency with BERT/MAE. In this way, we can train infilling and token-by-token in the same way with very little modification on the sequence.\n\n> What about the speed of iterative super-resolution compared with stacked image super-resolution models in Imagen?\n\nImagen has two levels of diffusion-based super-resolution. According to the paper, they use a DDIM cosine noise schedule (4,000 diffusion steps) for 64->256, and 1,000 diffusion steps for 256->1024. The number of steps means that we need to forward the model that times (4,000 and 1,000) during generation naively. From the paper, it is not clear whether they use any fast generation methods. In contrast, CogView2 need 6 forward steps for super-resolution. ",
" > As introduced in the introduction section the CogLM is general for many tasks like infilling tasks, and image captioning, but these capabilities are not presented in the paper.\n\nPlease see Appendix B for some results of the text-guided infilling task, and we use the image caption for post-selection as in CogView. We didn't report details about the image captions because it is not the focus of our work.\n\n> The comparison with current works is not convincing. For example, CovViiew2 didn't show significant improvements over previous methods on FID-0. Previous SOTA methods like Make-A-Scene and DALL-E2 did not report the results of FID-1 to FID-8 for comparison. \n\nFirst, we didn't claim CogView2 achieve better performance than DALL-E2, while instead we analyze the difference in section 6.\n\nSecondly, as we stressed in Line 270, **we need to downsample the images back to 256*256** for a meaningful FID comparison, which largely reduces the usage of our super-resolution method.\n\nThirdly, FID itself is not a stable metric. According to https://www.cs.cmu.edu/~clean-fid/, even jpeg quality 75/100 can create an up to 20 FID difference. We also find whether center-crop COCO images create a >4 FID difference on this benchmark. We care more about human evaluation performance, where CogView2 outperforms CogView, LAFITE et al. by a large margin. However, many text-to-image models are not open-source, so that we cannot include them in the evaluation. This also suggests the value of open-sourcing of CogView2.\n\n> Besides, some latest works like latent space diffusion and VQ-Diffusion are missed in the table for comparison.\n\nLatent space diffusion first appeared as an unconditional generation paper, and updated a text-to-image model at the same time of our paper. Thank you for bringing it back to our scope. We will compare it in a revised version. We already cited VQ-Diffusion and will add it to the table. These methods are diffusion-based and not aim to generate high-resolution images.\n\n> The quality and fidelity of these generated samples presented in the paper are not that impressive. First, the generated image is still blurry. Second, we can observe clear unreasonable structures for the human hands or faces.\n\nThe area is indeed developing very fast, and the recent DALL-E2, Imagen (after submission) and Parti (after submission) show better quality. However, The current text-to-image model is a large project, the final performance depends on many things, e.g. data, framework, resolution, parameters, et al. Our paper gives a concrete solution for a certain aspect -- the generation of high-resolution autoregressive models. In our opinion, this should also be encouraged. We discussed the way to improve our model in section 6, and the lack of deep text understanding revealed in Imagen might be the main reason of the gap, which is orthogonal to the contribution in this paper.\n",
" Thank you for your feedback, and we will address your concerns point by point as follows:\n> The training and inference efficiency of text2image generation are not evaluated or compared with previous methods with wall clock time or FLOPS.\n\nThank you greatly for the valuable suggestion to add a clear table for FLOPs and time. In the submitted verison, we report the final achievement (10x faster than CogView) in Line 57. Here we present the time and FLOPs for a 4,096 sequence on an A100-40GB GPU with different AR-based methods. The model configs are the same as the pretrained CogLM 6B. \n\n| | FLOPs | time | Memory (inference) |\n| ---- | ---- | - | - |\n|Forward (also teacher forcing training) | $1.17*10^{14}$| 858 ms | 5,041MB\n|Autoregressive generation (No cache)| $4.81*10^{17}(4095\\times)$ | about 1h | 5,041MB\n|**Autoregressive generation (cached)** | $1.17*10^{14}(1\\times)$ | 225.9s | 4,865MB\n|LoPAR| $7.02*10^{14}(6\\times)$ | 4.89s | 5,041MB\n|LoPAR+local_attention | $5.82*10^{14}$ | 3.41s | 352MB\n\nAs discussed in the second paragraph (Line 13-17), our motivation is to increase the degree of parallelism for acceleration, even with more FLOPs. Autoregressive generation with cached hidden states have the same FLOPs with a teacher-forcing forward step, but is much slower (858ms vs 225.9s). We draw a diagram (https://i.imgur.com/T0Y9io2.png) for better understanding this. For LoPAR, it is exactly N (N=6 in our setting) times and FLOPs of forward steps. \n\nThe results show that we reduce the time for generation from 225.9s to 3.41s and the memory from 5.4GB to 352MB. We will add this\n part to a revised version.\n\n\nTo compare with the specific previous text2image models is quite hard, because the speed is relevant to the model size. All AR-based methods, e.g. DALL-E[1], CogView[2], Make-A-Scene[3], et al. should share the same trends with the above table. \nThe diffusion-based methods have a totally different framework, whose FLOPs itself is $N\\times$ that of forward, where $N$ is the diffusion steps (usually 1,000). The most prevalent acceleration is to make a trade-off between quality and speed using spaced sampling [4] or Analytic DPM [5]. They are not appropriate to be directly compared with AR-based methods.\n\n[1] Ramesh, Aditya, et al. \"Zero-shot text-to-image generation.\" International Conference on Machine Learning. PMLR, 2021.\n\n[2] Ding, Ming, et al. \"Cogview: Mastering text-to-image generation via transformers.\" Advances in Neural Information Processing Systems 34 (2021): 19822-19835.\n\n[3] Gafni, Oran, et al. \"Make-a-scene: Scene-based text-to-image generation with human priors.\" arXiv preprint arXiv:2203.13131 (2022).\n\n[4] Nichol, Alex, et al. \"Glide: Towards photorealistic image generation and editing with text-guided diffusion models.\" arXiv preprint arXiv:2112.10741 (2021).\n\n[5] Bao, Fan, et al. \"Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models.\" arXiv preprint arXiv:2201.06503 (2022).\n\nIf our answer and additional statistics above ease your concerns, could you increase your rating a bit? Please tell us if you have further concerns.\n\n",
" This paper presents CogView2, which aims for faster and better text2image generation based on its previous work CogView. The key idea is to adopt the spirit of coarse to fine and generate low-resolution tokens first(20x20) and then do a super-resolution to 60x60 tokens for high-resolution images. Strengths:\n1. This paper tries to solve two critical issues of the current auto-regressive-based text2image model: quality and efficiency. \n2. This paper provides many visualization consequences in the paper.\n\n\nWeaknesses:\n1. This paper claims too many things but does not verify them clearly in experiments. a) The training and inference efficiency of text2image generation are not evaluated or compared with previous methods with wall clock time or FLOPS. 2) As introduced in the introduction section the CogLM is general for many tasks like infilling tasks, and image captioning, but these capabilities are not presented in the paper.\n\n2. The comparison with current works is not convincing. For example, CovViiew2 didn't show significant improvements over previous methods on FID-0. Previous SOTA methods like Make-A-Scene and DALL-E2 did not report the results of FID-1 to FID-8 for comparison. Besides, some latest works like latent space diffusion and VQ-Diffusion are missed in the table for comparison.\n\n3. The quality and fidelity of these generated samples presented in the paper are not that impressive. First, the generated image is still blurry. Second, we can observe clear unreasonable structures for the human hands or faces. \n Please refer to the weaknesses part. Yes.",
" Auto-regressive transformer-based text-to-image model can generate elegant pictures, but also face low-generation speed problems for high-resolution image generation. This is due to the extremely long sequence length and the auto-regressive decoding schema. In this work, the authors propose Cogview2, which is a hierarchical transformer and adopts local parallel auto-regressive generation instead of the global AR. The proposed Cogview2 achieves comparable or even better results than Cogview and speeds up the inference a lot. Overall, Cogview2 is an efficient version of Cogview but more fast benefiting from the local parallel decoding. ## Strength\n- The proposed Cogview2 consists of several modules. First, they use the CogLM to generate a preliminary image with 20 $\\times$ 20 tokens. The following super-resolution is based on this initial image, and I think a useful finding here is that we do not need to predict an image with a high resolution from the scratch, since a small 20 $\\times$ 20 tokens may involve the information given the text. This may motivate the following researcher to design more efficient text-to-image models.\n- Local parallel autoregressive decoding is interesting. Traditional AR for text-to-image generation flatten the image into a 1-D sequence and generate the visual tokens from left to right. It is quite time-consuming and also ignores the spatial correlations.\n- Thanks to the flexibility of pretrained general language model, Cogview2 can perform image completion naturally \n- Achieve comparable results with auto-regressive modeling while speeding up the inference quite a lot.\n\n## Weakness\nThere is no major concerns for me. Maybe, I would like to see a totally non-autoregressive model which can deliver comparable results with AR ones. \n- Have you tried other local parallel generation manners? e.g. within a local patch, left to right or top to bottom generation in a non-autoregressive fashion?\n- The attention mask shown in Figure2 is somehow difficult to understand. Do you mean the token in green box is masked and going to be generated? This slightly confuses me.\n- What about the speed of iterative super-resolution compared with stacked image super-resolution models in Imagen? There is no limitation discussed in the current version. The authors have addressed the potential negative societal impact.",
" This paper proposes a pretraining method, a Cross-Modal General Language Model (CogLM), that masks both image and text tokens in input and learns to predict them in an autoregressive manner, while handling bidirectional context. By fine-tuning a pretrained transformer with this approach, the authors construct a hierarchical model, CogView2, which first maps a generated image into a larger image (direct super-resolution) and refines local patches (local parallel autoregressive), thus improving resolution as well as inference speed.\n\nThe experiments in the paper suggest that CogView2 performs comparable to other models despite its smaller model size and training data size. Meanwhile, the approach results in a considerable reduction in model run times (e.g. 10x faster than its predecessor, CogView).\n\nTraining and evaluation in this paper considers text in both English and Chinese.\n Strengths\n* The paper is clearly written and aided with great visualizations. Overall, the problems and proposed solutions are well-motivated and supported with appropriate evidence or justification (e.g. findings from existing literature, their own empirical observations, or limitations due to compute resources).\n* The topic is timely; the effort for making the task of text-to-image generation faster and better is of great interest to the community.\n* Their approach of generating a low-resolution image and refining it to be a high-resolution image is simple and straightforward. The use of various techniques is adequately justified (e.g. masking strategy and attention mask) and ablated (e.g. clustering sampling and attention upweighting).\n* “Faster” text-to-image generation: One of the main contributions of this paper is to make text-to-image generation faster. Although the experiment section doesn’t directly compare different models’ inference time in an end-to-end fashion, the last paragraph of the introduction section mentions that their model run time for local parallel autoregressive generation is 600x faster and overall 10x faster than their previous model.\n* “Better” text-to-image generation: The experiments follow popular benchmarking practice and discuss the gap between automatic metrics and human evaluation. With automatic metrics, CogView2 performs comparable to other methods on MS-COCO based on Frechet Inception Distances. Based on human evaluation, CogView2 performs better on all metrics (image clarity, texture quality, and relevance to the caption) than CogView, Lafite, and DF-GAN.\n\nWeaknesses\n* “Better” text-to-image generation: As the authors acknowledged in the paper, the automatic metrics on MS-COCO may not be the best way to evaluate these models, hence the claim of CogView2 being “better” (in the title) or competitive (in the abstract) compared to other models such as DALLE2 can use some scoping/hedges. Outside of this paper, there have been some informal qualitative comparisons between several models, which seem to give the impression that CogView2 is not strictly better than other models. 
\n * https://huggingface.co/spaces/THUDM/CogView2\n * https://twitter.com/bhagatsurya2/status/1542824988092530689\n * https://www.reddit.com/r/MachineLearning/comments/vkvq0j/r_cogview2_faster_and_better_texttoimage/\n* Bilinguality: Since this paper considers text in both English and Chinese and notes that “Chinese input produces better results than English input” (https://huggingface.co/spaces/THUDM/CogView2), it would be interesting to see more in-depth analysis on this bilingual aspect of the model. The authors state that they used [BOE] to denote the beginning of English text and [BOC] for Chinese text, but do not justify the decision or discuss any findings based on two languages. Questions\n* What were the main challenges/blockers for directly comparing different models’ inference time in an end-to-end fashion?\n\nSuggestions\n* Discussion on failure modes: Even from the cherry picked examples in Figure 1, there are multiple failure modes observed (e.g. different numbers or lengths of fingers). What are the main types of failure modes the authors or human annotators observed? Any difference when it’s in English vs. Chinese?\n\nNitpicking\n* In Figure 2: “Supports tokenization of both Image Chinese and English” → “Supports tokenization of both images and texts in Chinese and English”\n* In Section 2: “DF-GAN, et al.” → “and DF-GAN.”\n* In Section 2 and throughout the paper: use either “VQ-VAE” or “VQVAE” to be consistent\n* In Section 3.1: consider moving the paragraph “In NLP, the General Language Model [...]” to Related Work\n* In Section 3.1: $l$ and $r$ are not defined\n* In Section 3.1: “where [BOE], [BOC] are separators meaning beginning-of-English and beginning-of-Chinese” → “where [BOE] and [BOC] are separators to indicate the beginning of English text and that of Chinese text” \n* In Section 3.1: “Ideally, the two tasks should be separated” is not justified\n* In Section 3.2: “Image, Chinese and English” doesn’t really type check; maybe it should be “Image and Text in Chinese and English”?\n* In Section 5.2: “Frechet Inception Distances and Inception Scores” → “Frechet Inception Distances (FID) and Inception Scores (IS)”\n* In Section 6: clarify “third-level super-resolution)\n* In Section 7: “it is possible to train a classifier to distinguish the real and CogView2-generated images according to the texture features” is not supported with any evidence\n I think some scoping about text is necessary. The paper generally assumes that input text can be any text in English and Chinese; their github repository explicitly says “any text” (https://github.com/THUDM/CogView2). For preciseness, it would be helpful to note any potential/practical limitations more clearly. For instance, CogLM can accept up to 111 text tokens (Section 3.2). And presumably, there aren’t that many short text inputs (e.g. one or two words) in the training data – then what would be a reasonable minimum length for the text input for the model to perform well? More generally, based on the types of images and texts in the training data, CogLM may perform better for certain kinds of images and texts. This insight can greatly help the future use of this pretrained model.\n\nThis applies to images as well. The fact that this paper only considers square images (based on N^2 notation) is not explicitly addressed.\n",
" This paper proposes a faster and better text-to-image generation model called CogView2. Compared to the CogView baseline, main contributions are: 1) hierarchical generation that upsamples the original low-resolution image tokens and then refine them, 2) customized CUDA kernel that speeds up training, 3) a special attention masking strategy used in CogLM during training. Experiments clearly demonstrate that CogView2 generates better images than CogView both in terms of FID scores and also in terms of human evaluation. \n Strength:\n\nThe paper is mostly easy to follow. Authors’ proposed idea of using super-resolution and refinement modules to hierarchically generate higher resolution images is intuitive and works well in practice. Designing these two modules is clearly non-trivial work. Authors further demonstrate that clustering sampling is better than simple top-k. I also appreciate that they are willing to write a customized cuda kernel to speed up training. The speed-up seems to be significant in the autoregressive case.\n\nWeakness:\n\nModel improvement is limited compared to CogView. Both use Transformer to jointly learn the likelihood of text tokens and image tokens. The main difference is the new masking strategy in CogLM where tokens inside the mask region are trained to predict the next token based on past masked-tokens and non-masked context tokens. Authors stated that this approach unifies autoregressive generation and bidirectional prediction, but I find this design lacking justification. Experiment section also fails to provide any ablation study to support the choice of this particular masking strategy. \n\nEven with the new hierarchical design, CogView2 still generates image resolution lower than other works such as DALL-E-2 [1] and Imagen [2]. Stacking another direct/iterative super-resolution module is straightforward and should solve the issue. Resource limitation is indeed a problem but still I need to point out the lower resolution as part of the weakness. \n\nOnly a few hand-picked visualizations are provided in the main paper and in the supplement. This makes it very hard to qualitatively judge the performance. \n\n[1] Ramesh, Aditya, et al. \"Hierarchical text-conditional image generation with clip latents.\" arXiv preprint (2022).\n\n[2] Saharia, Chitwan, et al. \"Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding.\" arXiv preprint (2022).\n (1) It will be great if authors can provide more visualizations, especially those with higher resolution. I can not judge the quality of generated images from just a few hand-picked ones. \n\n(2) What is FID-k in Table 1? I assume it means radius of the Gaussian Filter but couldn’t find any explanation in the text. If my guess is correct then shouldn’t FID–0 be the most important number? Does this mean that CogView2 is worse than multiple baseline methods in terms of image quality?\n\n(3) As stated before, I have concerns about the masking strategy. I do not see a motivation for joint learning of autoregressive generation and bidirectional mask prediction. Authors are encouraged to provide more ablation studies to support the design of CogLM and explain why it is better than the training in CogView. \n\n(4) For human quality evaluation, I wonder why authors do not include DALL-E as part of the test? That should be a key baseline.\n\n Limitations and potential negative societal impact are adequately addressed by the authors.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"Iv_AWJPjeo8",
"A_tdGDBj_0u",
"A_tdGDBj_0u",
"ag5GOZXc_5P",
"SR9o2rOH40",
"IS-KPcyimMM",
"IS-KPcyimMM",
"nips_2022_GkDbQb6qu_r",
"nips_2022_GkDbQb6qu_r",
"nips_2022_GkDbQb6qu_r",
"nips_2022_GkDbQb6qu_r"
] |
nips_2022_nxw9_ny7_H | Deep invariant networks with differentiable augmentation layers | Designing learning systems which are invariant to certain data transformations is critical in machine learning. Practitioners can typically enforce a desired invariance on the trained model through the choice of a network architecture, e.g. using convolutions for translations, or using data augmentation. Yet, enforcing true invariance in the network can be difficult, and data invariances are not always known a priori. State-of-the-art methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems, which are complex to solve and often computationally demanding. In this work we investigate new ways of learning invariances only from the training data. Using learnable augmentation layers built directly in the network, we demonstrate that our method is very versatile. It can incorporate any type of differentiable augmentation and be applied to a broad class of learning problems beyond computer vision. We provide empirical evidence showing that our approach is easier and faster to train than modern automatic data augmentation techniques based on bilevel optimization, while achieving comparable results. Experiments show that while the invariances transferred to a model through automatic data augmentation are limited by the model expressivity, the invariance yielded by our approach is insensitive to it by design. | Accept | The decision for this paper was a hard one. I pondered the scores with respect to the engagement of the different reviewers. I believe the initial scores were due to a misunderstanding of the limitations of the baseline model Augerino, and how the proposed method solves some of the failures and limitations of Augerino (e.g. being able to model only affine transformations). I also find that the authors expanded their experiments in a convincing manner during the rebuttal period. We encourage the authors to *improve the clarity of their contributions* in their final version, and to include all additional experiments that were run during the rebuttal period.
| val | [
"f7ThyXEWYm",
"6Ptp6AnIuM1",
"1zXiE_CvF9Y",
"7-o57NZ5Qo_",
"MLqDP2DFj4",
"YR_41m6nH-z",
"7dFMEnqkUQF",
"9ntBV2MYmi",
"zMDfPT6MCG",
"ed7pyM0zZ1i",
"NqFRBjvhwTM",
"madgNlDgfv",
"fUCM7hc1zRAy",
"h_iOuUbGCaW",
"TVL0YmPskKA",
"plJubC1tAQ5",
"FTL99AJUi5v",
"B85nwj8yqQW"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their new comments and are glad that our new experiments and revised manuscript helped to convince of the relevance of our contribution. Please note however that your **rating has not been improved yet**.\n\nConcerning the caption of Figure 3, we now better understand what was the problem. Indeed, we agree that this is not the correct place to comment on the failing of $\\mu$ regularization, and **will postpone it to Figure 5 in the revised manuscript**.\n\nConcerning the results on CIFAR10, note that **fixed augmentations (and AugNet) beat AutoAugment and RandAugment** official implementations. This shows that it is not easy to beat the manually crafted augmentations, which is probably why **AutoAugment and RandAugment papers only add their policies on top of the fixed augmentations**, which we don't do here. We hence still believe that showing we can **learn augmentations from scratch without prior knowledge and achieve comparable performance to fixed augmentations is a valuable contribution**.\n\nConcerning the large error bars in Figure B.10 for the MASS experiment, these are due to the realistic setting in which we placed ourselves. Indeed, we repeated the experiment using a cross-validation scheme where the split separating testing and training subjects changes for each run. This is important because **(i) of the high inter-subject variability characterizing this type of data modality** and **(ii) because generalizing to new subjects is essential in medical applications**. This high variability is hence transferred to the performances of all models in this experiment. On the contrary, the CIFAR10 experiment leads to small variances **for these same methods**, as it uses a common test set shared across all runs for evaluation. While we could follow a similar scheme to reduce the variance in the sleep stage classification experiment, it would not be useful for real applications and could even lead to misleading conclusions. Nonetheless, we agree with the reviewer that it is difficult to interpret the statistical significance of our results in a setting where inter-fold variance is larger than the methods’ deltas. We hence **revised figure B.10, by computing per-fold improvements of each method compared to ADDA performance**. This allows us to compare the methods fold-wise, **leading to more statistically significant results**. For a training time budget of 2h, we obtain indeed that **AugNet beats ADDA on 4 out of 5 folds, with a median accuracy improvement of 0.94%, a first quartile q1=0.60% > 0 and last quartile q3=1.51%.** The corresponding figure will be added to the supplementary material.",
" Thank you for conducting experiments, updating the paper, and addressing my concerns. I believe that the paper has improved overall hence I will increase my rating.\n\nPlease see some additional comments below.\n\nUnless I’m reading it wrong, Figure 3a does show that AugNet with mu regularization (in green) indeed learns an angle very close to the true angle (with larger errors than mu-omega regularization). Later Figure 5 shows that mu regularization fails because it learned the wrong weight instead of the wrong angle. It is not fair to say in Figure 3’s caption that mu regularization fails as one can not draw that conclusion from Figure 3.\n\nWhile I agree that data augmentation tend to work better when training data is scarce, larger datasets/tasks still benefit greatly from data augmentation as shown by papers such as Copy-Paste augmentation, AutoAugment and RandAugment. Limiting validation to small datasets is a valid choice, but doing so also significantly limits its practical usefulness across dataset scales.\n\nWhile new Cifar experiments are added and performance are improved, the quantitative empirical performance on the non-toyish datasets such as Cifar and MASS (Figure 6) is not very strong as it roughly matches \"fixed augmentation\" in the former and ADDA in the latter. As many of the error bars in Figure B.10 (aka Figure 6b with error bars) overlap, and the effect size is small, it seems that many methods perform similarly as many of the differences may not be statistically significant.\n",
" We thank the reviewer for reading our response and engaging in a discussion.\n\n**Concerning point (2),** please note that the **whole automatic data augmentation literature until Faster AutoAugment** [4] has been concerned only by discrete search approaches which do not require differentiable augmentations [2, 3, 6, 8]. These approaches **cannot make use of gradient-based learning (and can hence not be learned end-to-end),** making them less efficient than more modern approaches [4, 9, 10]. Furthermore, as explained to other reviewers who raised this concern, **differentiable augmentations is not a strong requirement**. Indeed, standard techniques can be used to make basically any augmentation operation differentiable (see e.g. answer to reviewer 1), and the corresponding additional **overhead is non-existent** for researchers and practitioners as long as **open-source implementations** are made available. Computer vision implementations have already been made available by Faster AutoAugment authors (as well as Kornia library), and EEG implementations will be made available upon the publication of our article. **This discussion has been added to the conclusion of our revised manuscript.**\n\n**Concerning point (1),** making an experiment showing the number of training examples vs performance boost with our and other augmentation approaches would only demonstrate that DA helps more in lower data regimes. This is a **well-known result** demonstrated for example in [10] and is inherent to all types of data augmentation, as pointed out by reviewer 4 (GmuX). Hence **we can’t see how this additional well-known experiment could help us better understand the benefits of our method.**\n",
" Thank you to the authors for their thoughtful response to feedback.\n\nAfter answering some questions and adding additional baselines, I believe the paper has improved and have updated my score. There are still, however, outstanding concerns raised by other reviewers. To further improve the paper, the authors may consider (1) scaling AugNet to larger datasets or performing further analysis (e.g., test accuracy with # training sample compared to other augmentation methods) if it mainly provides benefit in small datasets and (2) suggesting techniques to learn invariances without having to specify them as differentiable augmentations beforehand (especially useful outside of CV datasets).",
" We would like to thank the reviewers for their remarks and suggestions. Please note that the manuscript was updated with new figures and changes highlighted. We have answered in details to each reviewer in separate replies, but are summarizing here the main points of our rebuttal to facilitate the work of the AC:\n\n- Reviewer 1 (Km4K) mainly criticizes the fact that we cannot learn all invariances of a dataset, nor its ‘’true distributions of invariance’’. We believe there is a misunderstanding since **we don’t do these claims in the paper** (cf answer to reviewer for more details).\n- Reviewers 1 (Km4K) and 3 (kEKu) criticize the fact that our method **requires a pool $\\mathcal{T}$ of candidate augmentations**, but this is not specific to our work and is the **general framework of nearly all automatic data augmentation papers** [1-8].\n- Reviewers 2 (7J9r) and 3 (kEKu) challenge the **novelty of AugNet compared to Augerino** [1]. To summarize:\n1. Augerino can only sample affine transformations while **we can sample any differentiable augmentation**;\n2. Hence, **Augerino’s scope is limited to applications such as computer vision**, while AugNet has a large scope;\n3. Thanks to our new regularizer and new modular architecture in layers, AugNet can learn **sequences of augmentations** in an optimal order (like employed in [2;3;6]), which was **not possible with Augerino**;\n4. Lastly, we have shown in our CIFAR10 experiment of section 5.1 that **AugNet outperforms Augerino**.\n- **Reviewer 3 (kEKu) seems to have missed the point 2 above**, which is why they criticized us for not comparing to Augerino in experiments which are out of its scope (sinusoids and EEG experiments of sections 4.2, 4.3 and 5.2).\n- Following reviewers 1 (Km4K), 3 (kEKu) and 4 (GmuX) suggestions, we have now **significantly improved our experiments** (fig 6a). We have indeed **increased AugNet’s accuracy** on our CIFAR10 experiment by simply increasing the number of augmentations sampled $C$ and have **added two new strong baselines** to it (AutoAugment [2] and RandAugment [4]), both outperformed by AugNet. We have also **added four new sensitivity analyses on AugNet’s hyperparameters** $C$ and $\\lambda$ and extended our experiment on model invariance to the larger CIFAR10 dataset (section B.2). The latter clarifies why it is ok to have magnitudes dropping to 0 (reviewer 4 GmuX).\n- Concerning the extension of our experimental results in general, our manuscript now presents **10 different experiments** (2 benchmarks, 3 qualitative analysis of learned invariance, 1 ablation study, 3 hyperparameter sensitivity analysis, 1 invariance vs model capacity analysis) in a total of **4 datasets** (2 synthetic and 2 real) with **2 different data types** (images and signals), **23 different augmentation operations**, and a total of **7 strong SOTA baselines**. We don’t focus on very large computer vision datasets such as ImageNet (reviewer 3 and 4) because we believe automatic data augmentation is all-the-more important for applications where data is scarce (e.g. neuroscience and other medical applications), which is the scope of this work.\n- Furthermore, as advised by most of the reviewers, **we have added a new section discussing the main limitations of our work** (need for a pool of candidate augmentations, limitation to augmentations amenable to differentiable relaxation and AugNet’s trade-off between performance and inference time).\n\n[1] Benton et al. 
(2020) Learning Invariances in Neural Networks\n\n[2] Cubuk et al. (2019) AutoAugment\n\n[3] Lim et al (2019) Fast AutoAugment\n\n[4] Cubuk et al. (2020) RandAugment\n\n[5] Hataya et al. (2020) Faster AutoAugment\n[6] Li et al. (2020) DADA\n[7] Ho et al. (2019) Population Based Augmentation\n[8] Rommel et al. (2022) CADDA\n",
" 4.1 **We don’t agree with the reviewer statement that the baseline augmentation corresponds to a cost of 0 GPU hours.** It is important to remember that CIFAR10 is a well-studied benchmark dataset, for which we know good performing augmentations, and that this does not correspond to real life use-cases of deep learning where one needs to learn on a fresh dataset. **Consider the case where we are not working with CIFAR10 but rather with data from a real-world application for which augmentations have not been explored as extensively** as for computer vision benchmarking datasets (like sleep stage classification for example). In these cases, we usually have augmentation candidates (because it is a known modality), but we have no idea of which one will work best given the data, model and task being solved. Our results on sections 5.1 and 5.2 show that **our method allows to train a model end-to-end and obtain a performance comparable to what we would get after trying many augmentation combinations manually**. It also shows now that it can **outperform AutoAugment trained for thousands of hours**. \n\n4.2 Also related to the last point and answering your second remark, note that we have improved the performance of AugNet by increasing $C$ from 4 to 20. We have done the same to the Augerino baseline to ensure a fair comparison. This brought us to a performance comparable to the baseline with fixed augmentations (AugNet: 93.2 +- 0.4, Fixed augmentation: 93.6 +- 0.2).\n\n4.3 As advised by the reviewer, **we have added RandAugment [3] and AutoAugment [4] baselines to the CIFAR10 experiment, which are outperformed by AugNet** (RandAugment: 92.4 +- 0.2, AutoAugment: 92.0 +- 0.1). We used the pytorch implementations as advised by the reviewer. Note that AutoAugment was trained over 5000 GPU hours according to [4] + 3h of training, while our method was trained end-to-end in 4h.\n\n4.4 **We don’t agree that selecting magnitude zero at one of the layers after 60 epochs in figure B.5 and B.6 constitutes a failure**. First of all, please note that the model and the augmentations are trained together and that it is known since PBA [6] and RandAugment [3] that **the best augmentation to use depends on the training stage**. Our results just show that after 60 epochs, the translate-y and brightness augmentations start to be less relevant (the convolutional network probably already learned the invariance within its weights, not needing the augmentation anymore). To justify our claim, **we have added two new figures (B9 a and b)** where we have evaluated the invariance of AugNet to the learned transformations. The plots correspond to the same trainings depicted on figures B.5 and B.6 and **show that AugNet remains invariant to the transformations whose magnitude dropped to 0.**\n\n4.5 Concerning the saturated magnitudes of figures B.5 and B.6, **note that experiments from sections 4.1 and 4.2 demonstrate that AugNet does not always learn binary transformations**. But of course, nothing prevents it from learning a magnitude of 1. or 0. In fact, the value of the learned magnitude depends on how the augmentation is parametrized, which is not particular to AugNet but rather a shared feature of nearly all automatic data augmentation methods [1-7].\n\n5. We have added to the supplementary a new version of **figure 6b with error bars** corresponding to 75% confidence intervals. We kept the original in the main paper since it is more readable, but added a note referencing the new version of it.\n\n6. 
As advised, we have also **added a section discussing the limitations of our work** in further details, including:\n- the limitation to differentiable or relaxable augmentations\n- the need for a set of candidate augmentations\n- the trade-off between inference time and performance\n\n[1] Hataya et al. (2020) Faster AutoAugment\n\n[2] Li et al. (2020) DADA\n\n[3] Cubuk et al. (2020) RandAugment\n\n[4] Cubuk et al. (2019) AutoAugment\n\n[5] Lim et al (2019) Fast AutoAugment\n\n[6] Ho et al. (2019) Population Based Augmentation\n\n[7] Rommel et al. (2022) CADDA",
" We would like to thank the reviewer for its careful and detailed review and its valuable suggestions to improve our experimental results. Also, we appreciate that the reviewer thinks we ‘’provide a **good review of related work**’’ and for finding our ‘’work **well motivated** and [with] **potential to make an impact**’’. We hope that the detailed answers below, and that **our new and more convincing results on CIFAR10** will satisfy the reviewer.\n\nThe reviewer is mainly concerned by the experimental part of the work, and we address their remarks in order:\n\n1. The reviewer found that we only evaluate the approach with small datasets, while data augmentations ‘’work better when training data is limited and its performance boost may diminish when the training data is abundant’’. We agree with the reviewer that data augmentation is **specially useful in low-data regimes in practice**, which is **precisely the scope of our work and the reason why we don’t experiment in very large computer vision datasets** such as ImageNet. Indeed, while nearly all automatic data augmentation papers only demonstrate their approaches on large image datasets, **many important applications of DL other than computer vision are limited today because of the scarcity of labeled data**. We are convinced that it is precisely in this regime that automatic data augmentation techniques have the most potential impact. This is the case for example in medicine or neuroscience, in which datasets are relatively small and data augmentations are considerably less intuitive and not as well-studied as for image-based applications. We hence believe that it is important to demonstrate automatic data augmentation in new scopes. Also note that this work **involved reimplementing the existing methods, whose official codes are also tailored to image data. We believe these comparisons and new implementations (to be open-sourced under publication) will be beneficial for the community.**\n\n2. Concerning the low capacity of the models considered in the sinusoids experiment of section 4.3, the latter was precisely designed to illustrate that the performance of data augmentation is limited by the capacity of the architecture being trained, unlike AugNet. For this, **we had to decrease model capacity until it could not hard-code the ground-truth invariance anymore**, which was quite low since we picked a simple interpretable invariance. This experiment had to be cast with synthetic data because we needed to know the ground-truth invariance to compute the appropriate metric and derive the oracle augmentation. **Furthermore, we have added to section B.2 a new similar analysis with a larger dataset and model (CIFAR10) showing exactly the same phenomenon as in toy experiment 4.3.**\n\n3.1 Concerning experiment 4.1, we are not sure to understand the reviewer’s comment: ‘’both mu regularization and mu-omega regularization work well’’. Please note that **the $\\mu$ regularization proposed in Augerino does not work for AugNet’s new architecture**, which learns the identity mapping and is hence not invariant to rotations between $\\pm \\pi/4$. Indeed, as explained in lines 228-233, AugNet’s weights $w$ select the translate-X transform and sets its magnitude to 0 (fig.4 bottom) when using the old regularization. \n\n3.2 We have added a sensitivity analysis of the performance on the parameter $\\lambda$ in section B.2.\n\n3.3 We have added a sensitivity analysis to the parameter $C$ in section B.2 for both the sinusoids and CIFAR10 datasets. 
We confirm the intuition that increasing $C$ increases the invariance of the trained model to the set of learned transformations and that this has an impact on the model performance. We have also added an empirical analysis of the time complexity vs $C$, showing that increasing $C$ also has additional cost in inference time which increases linearly with $C$. A discussion on the trade-off between performance and time complexity of our approach has been added to our new limitations section.",
" ## Point C)\n\nConcerning point C, please note that we have made a comparison on CIFAR10 (section 5.1) and that we have **added the baseline RandAugment [2]** as requested. We have also **added AutoAugment [3]** to this same experiment, which was already benchmarked on the EEG experiment of section 5.2, just as Faster AutoAugment [4]. Both new baselines are shown to be outperformed by AugNet on CIFAR10.\nHence, after following the suggestions of all 4 reviewers, our manuscript presents **10 different experiments** (2 benchmarks, 3 qualitative analysis of learned invariance, 1 ablation study, 3 hyperparameter sensitivity analysis, 1 invariance vs model capacity analysis) in a total of **4 datasets** (2 synthetic and 2 real) with **2 different data types** (images and signals), **23 different augmentation operations**, and a total of **7 strong SOTA baselines**, which is why other reviewers (7J9r) found that we have a ‘’great amount’’ of experiments which ‘’demonstrate the effectiveness of the proposed methods’’.\nNow, concerning new experiments with larger computer vision datasets, we would like to stress that our main contribution and motivation are to democratize end-to-end automatic data augmentation beyond this application of ML and in particular for settings where data augmentation is the most useful, which is when the sample size is not very large. Indeed, while nearly all automatic data augmentation papers only demonstrate their approaches on computer vision examples [1-7], **there are many other fields of application where automatic data augmentation is crucial**. In medical applications of AI, such as the analysis of EEG recordings for example, labeled data is very scarce with datasets orders of magnitude smaller than Imagenet. It is also application fields where existing data augmentations are considerably less intuitive and well-studied than those for images. **Developing new automatic data augmentation that work with more varied types of data and that can boost performance on smaller datasets is the ambition of AugNet. We hence don’t believe that adding new computer vision experiments with larger datasets would help to demonstrate the practical usefulness of our contribution.** We will add a sentence to the discussion to insist on this point.\n\n## Point D)\n\nFirst of all, please note that 2 out of 6 references suggested by the reviewer **were already cited in our paper**:\n[5] is cited in line 85 of our related work section,\nFaster AutoAugment [4] is cited in line 69 of our related work section and benchmarked in section 5.2.\nConcerning Cutout, it is not an automatic data augmentation approach (nothing is learned) but is rather just a data augmentation operation, which is why it is not cited. Other references suggested **[6-7] will be added to section 2**.\n\n## Point E)\n\nThe reason why learning one augmentation per layer is desirable is that **augmentations are often applied sequentially** [2, 3, 4, 6]. Furthermore, this design allows to **learn the correct order of transformations**, which don’t always commute (shearing > rotating is not equivalent to rotating > shearing). Also note that if our layers converged to a non-sparse weighted sum of transformations rather than a unique transformation, they would be hard to interpret. Indeed, **while augmentations usually encode known factors of variation of the data, their mixture does not**. 
We invite the reviewer to see **picture A.3 added to the supplementary material, comparing an image transformed with a sum of augmentations to another augmented with a sequence of augmentations. While the sum of augmentations leads to a ''blob''-like image, sequences lead to realistic examples.**\n\n## Point F)\n\nWe agree with the reviewer that our method is limited to scenarios where some set of candidate augmentations $\mathcal{T}$ is known. But this is **not a limitation specific to our work, but rather shared by the whole automatic data augmentation field**, which is based on this same assumption [1-4; 6-10]. A common real-world scenario in which these assumptions are valid is when we are dealing with a known data modality (images, audio, EEG signals, …) for which data augmentations exist, but we don't know which one will perform best given our dataset, model and task at hand. Indeed, it is known (cf. AutoAugment [3]) that the best augmentation depends on all these aspects. This is hence **not a strong assumption for practical use**.\n\n\n[1] Benton et al. (2020) Learning Invariances in Neural Networks\n\n[2] Cubuk et al. (2020) RandAugment\n\n[3] Cubuk et al. (2019) AutoAugment\n\n[4] Hataya et al. (2020) Faster AutoAugment\n\n[5] van der Wilk et al. (2018) Learning Invariances using the Marginal Likelihood\n\n[6] Zhang et al. (2019) Adversarial AutoAugment\n\n[7] Hendrycks et al. (2020) AugMix\n\n[8] Lim et al. (2019) Fast AutoAugment\n\n[9] Li et al. (2020) DADA\n\n[10] Rommel et al. (2022) CADDA",
" We would like to thank the reviewer for their remark. We are glad that they found our manuscript ‘’well written’’ and appreciated our synthetic examples for showing ‘’the ability [of our method] to uncover ground truth invariances’’. We would like to thank the reviewer for suggesting more ambitious comparisons with the state-of-the-art. We hope that the answers below and the additional baselines added to CIFAR10 will convince the reviewer of the relevance of this contribution.\n\nThe main concerns of the reviewer seem to relate to\n\nA) the novelty of our method compared to Augerino [1],\n\nB) the reason why we don’t compare to it on both the sinusoids experiment with synthetic data and in the experiment of section 5.2 with EEG data,\n\nC) comparisons on computer vision datasets to other automatic data augmentation techniques and larger datasets such as ‘’CIFAR10 and ImageNet’’.\n\nLess critically,\n\nD) they suggest a few references to be cited.\n\nThe reviewer also asks two other questions, which are:\n\nE) why selecting a single augmentation per layer is a good property,\n\nF) and whether it would be possible not to rely on a pre-specified set of transformations.\n\n## Points A)\n\nConcerning the novelty of AugNet compared to Augerino, while both share the same architectural idea of sampling augmentations and averaging the model outputs, please note that:\n**Augerino can only sample affine transformations**, because its augmentation module relies on the assumption that the group of augmentations have a Lie structure (when looking in details at the code of the augmentation samplers of AugNet and Augerino one can see that they are quite different. See code in supplementary material);\nBecause of the last point, **Augerino’s scope is limited to applications such as computer vision**, where these augmentations make sense (rotations, translations, shearing and scaling);\n**AugNet’s augmentation module** has a very different modular architecture based on layers, which **allows it to learn sequences of augmentations (like employed in [2;6])** in an optimal order, which was not possible with Augerino;\nLearning sequences like in [3;6] is possible because each layer of AugNet is capable of **selecting augmentations** thanks the new regularizer proposed. Again Augerino samples from a mixture while we obtain sequences of selected augmentations.\nLastly, we have shown in our CIFAR10 experiment of section 5.1 that **AugNet does outperform Augerino**.\n\n## Point B)\n\nPoints A and B are strongly related, the former being probably the reason why the reviewer did not understand why Augerino is not compared to AugNet on all experiments. **The reason why we don’t compare to Augerino on the sinusoids experiments of sections 4.2, 4.3 and the EEG experiment of section 5.2 is that these data types are out-of-scope for Augerino. This is explained in the paper in many places**, such as in:\n- l.92-95, section 2\n- l.235-236, section 4.2\n- l.319-320, section 5.2\n- l.45-46, section 1\n\nOur work hence **copes with a major limitation of Augerino and extends their ideas to a vastly broader range of applications**, which is one of our main contributions. This is a significant contribution since the ability to learn data augmentations to deal with small datasets is a game-changer in fields such as medical applications of AI where affine augmentations are too limiting.",
" [1] Benton et al. (2020) Learning Invariances in Neural Networks\n\n[2] Cubuk et al. (2019) AutoAugment\n\n[3] Lim et al (2019) Fast AutoAugment\n\n[4] Cubuk et al. (2020) RandAugment\n\n[5] Hataya et al. (2020) Faster AutoAugment\n\n[6] Li et al. (2020) DADA\n\n[7] Rommel et al. (2022) CADDA\n",
" We would like to thank the reviewer for their remarks. We are glad they found our experiments to be in ‘’great amount’’ and ‘’well designed to demonstrate the effectiveness of the proposed methods’’.\n\nThe main concern raised by the reviewer relates to its novelty compared to the Augerino model proposed in [1]. Before going into detailed answers we would like to stress that Augerino was indeed inspirational for our work, and we recognize its clear scientific merit. Yet, we would like to argue in what follows that with AugNet we offer a clear novel contribution.\n\n1. First of all, we agree that Augerino and AugNet share the same architectural invariance-promoting idea of sampling $C$ augmentations, transformation copies of the input with them and averaging the model outputs. However, **Augerino’s augmentation module is very different from AugNet’s: it can only sample some affine transformations** and uses generators in the Lie algebra associated to this group. Not only is this structure restrictive (requires a Lie group architecture and to know generators), its scope is mainly restricted to “spatial” data like images or molecules positions, for which the affine augmentations considered make sense. **AugNet’s augmentation module is more general purpose since it does not rely on Lie group structures, allowing it to sample and select non-affine transformations and hence be applied to a broader class of problems**, as demonstrated in experiments 4.2, 4.3 and 5.2.\n\n2. Furthermore, while **Augerino’s augmentation module can only sample from the joint distribution** of affine transformations considered (e.g. translate-x, translate-y, 2D-rotation, shearing and scaling according to sections 3.2 and B of [1]), our augmentation module has a modular architecture made of stacked layers. As each layer only selects one augmentation (c.f. section 4.4), **AugNet allows to learn sequences of augmentation which was not proposed by Augerino**. This is not only more in line with how data augmentation is done in practice [2-5], it also allows AugNet to learn the best order of augmentations, as demonstrated in experiments of section 5. This cannot be done with Augerino.\n\n3. Concerning the regularization, we also cannot agree that our design consists in simply ‘’injecting the [...] learnable weight parameters to the regularizer proposed in Augerino’’. Given that we have a new learnable set of parameters in AugNet, it was not obvious how to add it to the regularization. We tried other combinations like the norm of the sum $\\|\\mu + w\\|$ and sum of norms \\|\\mu\\| + \\|w\\|, but those do not work as well as the element-wise product proposed in our manuscript. Moreover, we provide a mathematical analysis of the two beneficial properties of our regularizer in sections 4.4, and an ablation study in section 4.1. This explains and empirically demonstrates the selective property of our regularization, which Augerino’s does not have. As a case in point, we have added to the supplementary a figure presenting the magnitudes learned by Augerino on the Mario&Iggy experiment of section 4.1 (figure A.2). We see that **Augerino’s magnitudes are not sparse and hence that it is learning a ‘’mixture’’ of augmentations difficult to interpret, while AugNet selects a single transformation per layer thanks to its regularizer** (fig. 4 top).\n4. 
Concerning the use of automatic approaches like [2-7] to learn augmentations beyond computer vision, they can in principle be applied to other data modalities, as shown in experiment of section 5.2 of the manuscript. Our main contribution regarding these approaches is to be **more efficient than discrete search approaches** [2] and **trainable end-to-end**, without the need of solving bilevel optimization problems as in [5-7]. Moreover, note that while methods [2-6] can be applied to other data modalities, **the fact is that none of these papers shows any experiments beyond computer vision**. Demonstrating these techniques in more varied contexts is hence another contribution of our work and we hope it will inspire future works with broader scopes.\n\n5. The reviewer also asked us to explain the sentence ‘’Fortunately, even when $C$ is small, $\\tilde{f}$ is an unbiased estimator of $\\bar{f}$’’.\nThe reason is that the empirical mean $\\frac{1}{N}\\sum_{i=1}^N Z_i$ built using i.i.d. observations $Z_i \\sim Z$ is an unbiased estimator of $\\mathbb{E}(Z)$ regardless of $N$. In our case, $N=C$, $Z_i = f(g_i x)$ with $g_i \\sim \\nu_G$ and $\\mathbb{E}(Z)=\\mathbb{E}_{g \\sim \\nu_G}(f(gx))=\\bar{f}$, where f and x are fixed for a given iteration.\nNow increasing the value of C helps to reduce the variance of the prediction, as shown now in figure 5.1 where using $C=20$ instead of $C=4$ leads to much improved results on CIFAR10. **We have also added a new analysis of the impact of $C$ on performance (and time) to section B.2**\n",
" Point B)\n---------\n\nThe reviewer found the motivation of our new data augmentation approach unclear because we don’t improve over all baselines. To clarify, our motivation is threefold:\n1. Allow to learn useful data invariances and incorporate them in predictive models **without the need for manual search**,\n2. Simplify automatic data augmentation with a new gradient-based method trainable end-to-end, without the need for complex bilevel optimization,\n3. Extend previous work which allows to learn invariances end-to-end [8] with a more modular architecture, capable of learning a broader range of transformations (beyond the Lie group of affine symmetries). This last point will allow fields like medical applications of AI to have access to these techniques which are mostly reserved for computer vision.\n\nThe reviewer mentions the results of the CIFAR10 experiment of section 5.1, where we cannot beat the fixed augmentations baseline. First of all, the gap between AugNet and the fixed augmentation baseline has been significantly reduced by increasing the number of copies $C$ in AugNet, as explained in our reply to reviewer GmuX. The two methods lead now to equivalent performance (AugNet: 93.2 +- 0.4, Fixed augmentation: 93.6 +- 0.2).\n\nFurthermore, note that while our augmentation technique does not lead to superior accuracy than this baseline, it allows to obtain the same level of performance without any prior knowledge of what augmentation will work. Indeed, it has been shown since AutoAugment [1] that the best augmentation to use highly depends on the dataset, model and task being solved. Hence, **consider the case where we are not working on CIFAR10**, for which we know which augmentations work best, **but are rather working with data from a real-world application for which augmentations have not been explored as extensively** as for computer vision benchmarking datasets (like sleep stage classification for example). In these cases, we usually have augmentation candidates (because it is a known modality for which augmentations have been proposed in the literature), but **we have no idea of which one will work best given the data, model and task being solved**. Our results on sections 5.1 and 5.2 show that our method allows to train a model end-to-end and obtain a performance comparable to what we would get after trying many augmentations combinations manually. It also shows now that it can outperform AutoAugment trained for thousands of hours, as well as methods based on complex bilevel optimization with gradient-based automatic data augmentation methods such as Faster AutoAugment [4] or DADA [5]. In that sense, experiment 5.1 on CIFAR10 demonstrates that AugNet is actually a very competitive method (motivation 1), while experiment 5.2 on EEG MASS dataset makes a clear case that AugNet is a promising technique beyond computer vision problems (motivations 2 and 3). This has been made clearer in sections 5.1. \n\nConcerning robustness to adversarial attacks: this is out of the scope of our study, but could be an interesting direction for future work.\n\nPoint C:\n---------\n\nWe thank the reviewer for suggesting this experiment, which indeed strengthens the analysis of our approach. We have included a graph comparing inference times and accuracies with varying C in section B.2. We observe that larger values of $C$ yield better performances.\nHowever, increasing the number of copies $C$ at inference also comes with a computation time that increases linearly.\n\n\n\n[1] Cubuk et al. 
(2019) AutoAugment\n\n[2] Lim et al. (2019) Fast AutoAugment\n\n[3] Cubuk et al. (2020) RandAugment\n\n[4] Hataya et al. (2020) Faster AutoAugment\n\n[5] Li et al. (2020) DADA\n\n[6] Ho et al. (2019) Population Based Augmentation\n\n[7] Rommel et al. (2022) CADDA\n\n[8] Benton et al. (2020) Learning Invariances in Neural Networks",
" Still concerning point A), the reviewer also raised the concern (1b) that not all existing augmentations can be learned since some are not differentiable, giving the example of Cutout. As briefly mentioned in line 72 of our related work section, there are standard techniques to relax most non-differentiable augmentations into differentiable surrogates, as first explained in Faster AutoAugment [4] and reused in many following gradient-based automatic data augmentation works [5, 7]. We use these same relaxations in our work, **which is explained in lines 146-149 of section 3.3., as well as in section A.4 of supplementary.** Using these techniques, Cutout can be relaxed and made differentiable using for example the straight-through gradient estimator [8]. Also, as a case in point, **cutout augmentations on the time and sensors dimensions (time masking and channels dropout resp.) were learned with gradient descent in our EEG experiment of section 5.2** (see table A.4). A discussion about this has been added to our new limitations and discussion section.\n\nWe are not sure to understand the concern (1c) regarding the selection of transformations combinations. If by ‘’combination’’ the reviewer means the order in which the transformations should be applied, our method allows to learn this. Indeed, by stacking two or more augmentation layers, they will each select a different transformation at each layer, as shown on figures B.2-3 and B.5-6. An explanation of why each layer selects a single transformation is given in section 4.4.\n\nLastly, the reviewer raises a concern about learning the ‘’true distribution’’ of invariances (2a and 2b). **We believe that there is a misunderstanding regarding this point**, since we never speak about or define a ‘’**true** distribution of invariances’’. As explained in section 3.1, we only define invariances of the model $f$ regarding **sets $G$ of deterministic transformations** (c.f. definition recalled in point above). In this paper we are trying to learn the set $G$ (which is potentially continuous), using a discrete set of typical augmentations as a **tool**. These augmentations are stochastic mappings, which hence represent **user-defined parametric distributions over many transformations**, since they have hyperparameters allowing to set how strongly they can distort the input data at the boundary of the distribution support. In practice, we use uniform distributions whose bounds are parameters to be learned, which allow us to create models $\\tilde{f}$ approximately invariant to the subset $G$ defined by the union of the supports of the selected augmentations (c.f. proposition 3.1). So concerning point (2) in general, there is no such thing as ‘’true distribution of invariances’’ to be learned, and the distributions over transformations are just tools used in our model to approximate an invariant expectation.",
" We thank the reviewer for their remarks pointing to some clarifications regarding our actual contribution as well as suggesting stronger experimental results on CIFAR10. We hope that the answers below address these concerns as well as the **novel and improved experiments in CIFAR10**.\n\n\nSpecifically, the reviewer asks clarifications concerning three main points:\nA) the basis on which we claim that we learn ‘’true invariances’’,\nB) the paper motivation as a data augmentation method and its performance compared to fixed augmentations on CIFAR10\nC) the trade-off between computational complexity and accuracy of our method.\n\nPoint A)\n---------\nConcerning the first point A), the term ‘’true invariance’’ is only used on line 4 of the abstract and line 136 of section 3.3, and it refers to the invariances of the underlying data distribution. The notion of invariance is briefly defined on line 110: we say a function $f$ is invariant to a set of transformations $G$ if for any $g$ in $G$, $f(gx)=f(x)$.\n\nThe first concern (1a) raised by the reviewer here is that to learn any arbitrary true invariance one would have to enumerate all possible transformations of the data that exist. But **we never claim to learn all data invariances** and only **assume that the data is invariant to at least one transformation from our discrete pool $\\mathcal{T}$**. This is not a strong assumption and we consider here augmentations that were proved useful in some contexts. This is now made clearer in section 3.3. Furthermore, we demonstrate on synthetic examples 4.1 and 4.2 **where we know at least one invariance of the data** that our model is able to learn it without any problem, which is one of the main points of these experiments. The number of invariances AugNet can learn depends indeed on the number of layers used and on the set of possible transformations $\\mathcal{T}$. Joining what is asked by reviewer keKu, the fact that we cannot learn invariances to any arbitrary endomorphism $T:\\mathcal{X} \\to \\mathcal{X}$ and can only select transformations from a finite pool is indeed a limitation of AugNet but is not specific to our work. It is the setting used by the whole field of automatic data augmentation ([1, 2, 3, 4, 5, 6]). This limitation and a discussion have been added to the end of the manuscript.\n",
" The paper proposes a method to learn data invariance along with model training. The method avoids architectural modification and bilevel optimization, which makes it easy to use in many scenarios. The technical details are generally good and easy to follow; however, I am concerned about the motivation and theoretical foundation of the paper. My primary concern is the claim that the method can recover the *true* data invariance. It ideally would require two steps: (1) all possible types of invariance are considered in the augmentation module, and (2) the distribution is properly learned for each type of invariance. However, these two steps can hardly be satisfied. For the first step, (1a) it is impossible to enumerate all types of invariance; (1b) not all types of invariance are differentiable (e.g., cutoff); (1c) the paper does not discuss how to select the invariances among all possible combinations (e.g., using validation). For the second step, (2a) the scalar amplitude itself is insufficient to characterize an unparameterized distribution, and (2b) there is no theory in the paper that the recovered distribution matches the true distribution (Indeed, the learned distribution degenerates to identical mapping without regularizer). In summary, I can hardly agree that the learned augmentation matches the *true* invariance.\n\nIt also makes the motivation of the paper unclear. One reason to use augmentation is to boost performance --- however, the proposed method still falls behind the fixed augmentation. Another reason to use augmentation is to boost robustness against invariance attack --- however, such robustness is not systematically evaluated in the paper.\n\nMy last minor concern regards computational complexity. As mentioned in the paper, the model in inference requires more than one sample per example. I wonder if the authors could provide an analysis of the tradeoff between computational complexity and model accuracy/uncertainty. Not applicable.",
" The author proposed an augmentation layer to learn data invariance from the training data. The proposed method is applicable to any differentiable transformation, and the augmentation layer can learn a weighted sum of different transformations that can best captures the data invariance. Sufficient experiments are conducted to verify the proposed arguments. Pros:\n\nThe paper proposed an idea that the data invariance can be learnt in an end to end fashion. Greatly eased the learning process comparing to bi-level optimization methods.\nGreat amount of experiments are conducted on both synthetic data and benchmark datasets. Experiments are well designed to demonstrate the effectiveness of proposed methods.\n\nCons:\nOverall, I think the paper shares very similar idea comparing to the Augerino paper. The main difference is that the proposed method applied learnable weights to multiple transformations. Thus I consider the novelty of this paper as an incremental variation of existing works.\n\nThe author proposed a selective regularizer that is different from the one used in the Augerino work. But the difference of the proposed regularizer is basically injecting the newly proposed learnable weight parameters to the regularizer proposed in Augerino, making the proposed method more like a mirror design of the existing work.\n\nThe author explored the proposed augmentation method beyond computer vision tasks and emphasized the novelty of this extension. I do not see this point as something unique for this proposed method. In fact, there shouldn’t be obstacles for other existing methods to be applied to tasks beyond computer vision. It would be great if the author can further explain on this point if I missed/misunderstand something.\n\nsome minor points:\nin line 123, the author mentioned : Fortunately, even when C is small, f˜ is an unbiased estimator of f. Can the author further explain on the reason?\nFigure 4 could be a bit hard to read, the position of each label could be a bit confusing. Might be better to reformat the figure. see Strengths And Weaknesses There is no negative social impact of this work.",
" The authors propose a module that learns the invariances (and their magnitudes) present in a dataset while training a neural network. The module applies a convex combination of augmentations before passing to a trunk network, where the outputs are aggregated. To prevent trivial augmentations (such as identity), the authors apply regularization to the magnitude predictions of their module. The authors test their module on synthetic and real-world data, comparing to other learned invariance approaches. Strengths\n* The paper is well written and clearly presents its methodology and experiments.\n* The synthetic experiment shows the ability to uncover ground truth invariances from a set, and the Mario&Iggy experiment demonstrates the benefit of the selected magnitude prediction regularization.
\n\nWeaknesses\n* The proposed approach is an incremental improvement to that proposed by (Augerino, Benton et al. 2020). Both rely on aggregating the output of a network across augmentation samples to approximate an expectation, and both require some form of regularization to prevent learning identity for the transformation magnitude. \n* The paper is limited in experimental validation. Some experiments do not adequately compare to related methods (e.g., Augerino on the EEG dataset)\n* The authors should reference (“Learning Invariances using the Marginal Likelihood,” van der Wilk et al. 2018). It seems the aggregation module proposed by the author is similar to the Bayesian approach in the mentioned work.\n* Since the authors are comparing to learned augmentation approaches, they are missing many related works that should be cited from that literature. Just a few, for example: (RandAugment, Cubuk et al. 2019), (Faster AutoAugment, Hataya et al. 2019), (Cutout, DeVries et al. 2017), (Adversarial AutoAugment, Zhang et al. 2019), (AugMix, Hendrycks et al. 2020). The authors should at least include a comparison to RandAugment in their experiments, such as Figure 6b.\n* Regarding test accuracy, many recent methods that have been proposed to learn an augmentation policy perform comparable or worse than the simple and computationally quick approach offered by RandAugment. Justification of learning the dataset invariances would be strengthened by also comparing to a network that implicitly learns invariances by training on RandAugment. * Why do the authors not compare to Augerino for the sinusoidal dataset?\n* Is there any way for the authors to apply their method on a dataset where a set of plausible transformations T_Q are not known beforehand? This would be closer to the problem setup of a priori learning of augmentations. \n* How does AugNet compare to related methods on a larger scale image dataset, like CIFAR-10 or ImageNet?\n* Why does AugNet learn to apply a single augmentation at each layer? Wouldn’t it be preferable to capture all invariances using fewer parameters? * The problem set-up states that the authors are interested in “data invariances [that] are not always known a priori”, but the authors still rely on pre-specifying which augmentations they will apply (denoted as a set T_Q). The propose methodology seems to be limited in scenarios where a set of transformations is not known a priori.\n* Without more experimental comparison and validation, it is difficult to assess the efficacy of the proposed method",
" The paper recognizes the difficulties of current bilevel optimization of automatic data augmentation (ADA) methods. \nThe authors propose AugNet, by adding learnable augmentation layers and an aggregation module to a base trunk model, and making network training and augmentation learning a single level end-to-end optimization problem.\nExperiments across two simulated tasks, one image classification task, and one physiological data classification task are conducted to show the empirical versatility of AugNet.\n Strengths:\n1. Learning data augmentation policies is an important topic in training generalizable deep neural networks. The authors provide a good review of related work and identify some important limitations of previous methods such as AA, FAA, and ADDA. Thus the topic and motivation of this work are well justified.\n2. Some mathematical intuitions are discussed in the paper in Section 3. Practical approximations are mentioned for Proposition 3.1.\n3. Basic experimental details such as number of epochs, learning rates, and weight decay are generally well documented, choices of hyperparameters are mostly sensible (see weaknesses for exceptions).\n4. Error estimates are provided for Figure 5 Top and Figure 6 (a).\n5. A range of synthetic and real classification tasks are tested.\n\nWeaknesses:\nMajor:\nGenerally, empirical evaluation is weak and unconvincing at several levels which makes it hard to believe the proposed method has much practical advantage over existing methods.\n\n1. Although the tasks are somewhat diverse, they are all small or toy problems. It is a well known trend that data augmentation tends to work better when training data is limited and its performance boost may diminish when the training data is abundant. It is important to empirically show how well AugNet scales to larger (both in size and complexity) datasets and tasks.\n\n2. Similar to tasks, the models used in the experiments have very low capacities (especially for Section 4.3). Not testing AugNet with larger and more realistic models would significantly limit its potential usefulness.\n\n3. For the Mario&Iggy experiments:\n\n 3.1. Qualitatively, both mu regularization and mu-omega regularization work well for Augerino and AugNet as they all roughly learn the optimal Pi/4 rotation.\n\n 3.2. It is unclear how sensitive AugNet is to lambda. Experiments with different lambda are missing.\n\n 3.3. Since C is set to 10 for Section 4.3 and 4 for other experiments, it is reasonable C is at least a somewhat important hyperparameter. \nExperiments to show how C affects performance are missing. \n\n4. The Cifar-10 experiments are very problematic in many ways. \n\n 4.1. AugNet simply performs poorly by falling behind even the base augmentation that was developed about a decade ago (from the AlexNet paper) which has 0 GPU hours of computation cost since it was handcrafted. Spending 1 extra GPU hour to learn an inferior augmentation policy is hardly a win.\n\n 4.2. The gap in top-1 accuracy between AugNet (91.9%) vs Fixed Augment (93.6%) is significant as it could correspond to a 2x or 3x model size difference, this shouldn’t be understated.\n\n 4.3. Since AugNet is supposed to improve upon methods like AA, it is natural to show results with AA and RA, since Pytorch has official implementations of searched policies on Cifar and ImageNet. These methods should significantly boost performance further.\n\n 4.4. 
Figure B.5 shows that layer 2 learns TraslateY but with magnitude 0, which means identity transform, which suggests the failure of the proposed regularization. The same issue can be found in Figure B.6 layer 1.\n\n 4.5. Additionally, other learned invariances with non-zero weights have magnitudes of 1, which also suggests that AugNet is only learning binary choices for both weights and magnitudes in this task. That also contradicts with what the authors propose regarding AugNet learning both weights and (continuous) magnitude.\n\n5. Most accuracies in the MASS task differ by less than 1%, it is hard to evaluate if any median trends observed are statistically significant without error bars.\n\n6. The limitations of this work are not properly discussed. \n\nMinor:\nOverall I think some math can be more concise or moved to the appendix, as this paper does not focus on proposing new theories. This will free up some space to discuss the limitations of this work and potentially more experimental evaluations of AugNet.\n While itemized suggestions are laid out in the “weaknesses” section, here’s some higher level suggestions.\n\nOverall I think this work is well motivated and has potential to make an impact if the experimental validation was much more convincing.\n\nTo me the first thing the authors could do without adding additional tasks and models is getting more solid results on existing tasks. As I mentioned in “weaknesses” error bars should be reported in all results to determine whether any effects are statistically significant, this should be the basis of experimental validation. It will also be nice to see stronger empirical performance across the existing tasks. As is, the improvements of AugNet over prior art is marginal and qualitative at best, with significant performance drop for Cifar. Important experiments such as performance vs lambda are missing.\n\nThe paper would also benefit a lot by going beyond the toyish task/model combinations. It is important to see whether AugNet generalizes to larger tasks and models for it to be more useful practically.\n The authors discussed various limitations of previous work such as the bilevel optimization nature of ADAs. While there are mentions such as how practical transforms deviate from the group assumption, the authors did not have a clear discussion of the limitations of AugNet."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"6Ptp6AnIuM1",
"YR_41m6nH-z",
"7-o57NZ5Qo_",
"9ntBV2MYmi",
"nips_2022_nxw9_ny7_H",
"7dFMEnqkUQF",
"B85nwj8yqQW",
"zMDfPT6MCG",
"FTL99AJUi5v",
"NqFRBjvhwTM",
"plJubC1tAQ5",
"fUCM7hc1zRAy",
"h_iOuUbGCaW",
"TVL0YmPskKA",
"nips_2022_nxw9_ny7_H",
"nips_2022_nxw9_ny7_H",
"nips_2022_nxw9_ny7_H",
"nips_2022_nxw9_ny7_H"
] |
nips_2022_7HTEHRMlxYH | FNeVR: Neural Volume Rendering for Face Animation | Face animation, one of the hottest topics in computer vision, has achieved a promising performance with the help of generative models. However, it remains a critical challenge to generate identity preserving and photo-realistic images due to the sophisticated motion deformation and complex facial detail modeling. To address these problems, we propose a Face Neural Volume Rendering (FNeVR) network to fully explore the potential of 2D motion warping and 3D volume rendering in a unified framework. In FNeVR, we design a 3D Face Volume Rendering (FVR) module to enhance the facial details for image rendering. Specifically, we first extract 3D information with a well designed architecture, and then introduce an orthogonal adaptive ray-sampling module for efficient rendering. We also design a lightweight pose editor, enabling FNeVR to edit the facial pose in a simple yet effective way. Extensive experiments show that our FNeVR obtains the best overall quality and performance on widely used talking-head benchmarks. | Accept | After rebuttal all reviewers recommend acceptance. The authors are encouraged to follow the reviewer suggestions on improving the final paper. | train | [
"_Axbg_KoyY",
"A7eID1fsZrm",
"Ww2bkBrP-2G",
"WnBGLJ01vED",
"dYgWHvNcc2f",
"A8TCWiFLueP",
"U8DpX55lAJt",
"VoqUk3p7olg",
"JxgB5ywhRGi",
"7YWNkfM5yx",
"1LVLhWb5sy2"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We will definitely do it. Thank you again for your supportive comment.",
" Thank the authors for providing such a comprehensive reponse to all my questions. The provided video is much better than the one before. Please try involving the provided details and changes to the revision and supplementary accordingly. ",
" We sincerely appreciate your comprehensive review and constructive comments. We will address your concerns point by point below, hoping that it helps you find the significance of this work.\n\n### Q1: Orthogonal Adaptive Ray-Sampling.\n\n**A1:**\n\n+ First of all, it should be noted that although both our model and NeRF use volume rendering, the purposes and implementations are different. In terms of purpose, NeRF combines neural field and volume rendering for synthesizing novel views, while our FVR leverages volume rendering to improve the quality of the warped image. Moreover, NeRF synthesizes novel images with different views according to the rays of different directions, while our FVR renders images only with the assumed ray at the angle orthogonal to the image plane, which is specially designed for face animation in this paper. In terms of implementation, our Orthogonal Adaptive Ray-Sampling does not require a two-stage network to refine the complex lights like NeRF, but only needs one MLP to process the 3D information. In short, Orthogonal Adaptive Ray-Sampling is more suitable for face animation than hierarchical (two-stage) volume sampling.\n\n+ Then we explain the computational burden. The computational cost of the Orthogonal Adaptive Ray-Sampling and NeRF's rendering mainly differ in two aspects. Firstly, since NeRF and FNeVR have different generation objectives, the number of light rays inputted to NeRF is much more than the number of voxels inputted to our FNeVR, making FNeVR have less computational cost. Besides, FVR only introduces one MLP network, while NeRF needs to adopt two MLP networks as stated above, indicating that FVR is more concise.\n\n+ Finally, we elaborate on the \"adaptive\". The adaptive part is conducted by the MLP in the FVR module. Specifically, our FVR introduces one MLP network to directly estimate the voxel probability $p_\\sigma$ of each voxel which is the integral of the volume density $\\sigma$ within a suitable interval size $\\delta$. Therefore, for our model, the selection of the interval size $\\delta$ is actually involved in the estimation of the voxel probability $p_\\sigma$ and adaptively adjusted by the MLP. We have revised the analysis in Section A in the supplementary materials.\n\n### Q2: The warped feature $F_w$ for the final prediction.\n\n**A2:**\n\n+ We conduct a new ablation study when the wrapped feature $F_w$ is not sent into the designed decoder, and the results are shown as follows. We also add these results in Table 4 in the revised paper.\n \n | Method | $\\mathcal{L}_1$ $\\downarrow$ | LPIPS $\\downarrow$ | PSNR $\\uparrow$ | SSIM $\\uparrow$ | AKD $\\downarrow$ | AED $\\downarrow$ | FID $\\downarrow$ |\n | --------------- | ---------------------------- | ------------------ | --------------- | --------------- | ---------------- | ---------------- | ---------------- |\n | FNeVR w/o $F_w$ | 0.0460 | 0.0854 | 22.653 | 0.7423 | 1.367 | 0.0262 | 12.07 |\n \n Obviously, the overall performance by the metrics for evaluating the authenticity of the generated results is much worse, indicating that not inputting $F_w$ into the decoder will adversely affect the model's ability to transfer the facial poses and expressions, and lead to the loss of facial details in the generated results.\n\n+ Yes. For FOMM+FVR, we input the concatenated results of $F_w$ and $F_r$ into the decoder just like the full FNeVR. 
We conduct the ablation study of FOMM+FVR to test and verify the effectiveness of our designed decoder in the paper.\n\n+ As stated in Section 3.3 of the paper, $I_m$ is the output of the tiny decoder (just two layers) when its input is $F_r$. We design the perceptual loss $\\mathcal{L}_R$ with $I_m$ to supervise the training process.",
" ### Q3: SPADE layer.\n\n**A3:**\n\n+ Thank you very much for your valuable questions and suggestions. Firstly, we conduct a new ablation study (FOMM, FOMM + SPADE, FOMM+FVR) to test the computation and memory consumption, and the results are shown as follows:\n \n | | FLOPs(G) | Params(M) |\n | ------------ | -------- | --------- |\n | FOMM | 56.24 | 59.767 |\n | FOMM + FVR | 72.894 | 62.413 |\n | FOMM + SPADE | 86.799 | 56.199 |\n | FNeVR | 130.109 | 61.378 |\n \n As you considered, the designed decoder which introduces the SPADE layer brings more computation consumption. Your suggestion is very reasonable, but it is worth noting that our decoder is entirely designed by ourselves and different from any SPADE decoders of other works. Although our designed decoder is not the key contribution of this paper, and the most effective improvement comes from FVR, it still helps to improve the effectiveness of the generated results, according to Table 4 of the paper. Moreover, the computation and memory consumption are far less than those of Face vid2vid, as shown in Table 3 of the paper. According to your suggestions, we will explore a lightweight decoder to achieve better results in the future.\n\n### Q4: The ablation study without the loss function (4).\n\n**A4:**\n\n+ Thanks for the insight. We have added the ablation study without the loss function $L_\\sigma$ in Table 4 of the revised paper. For the convenience of review, we also give the results as follows:\n \n | Method | $\\mathcal{L}_1$ $\\downarrow$ | LPIPS $\\downarrow$ | PSNR $\\uparrow$ | SSIM $\\uparrow$ | AKD $\\downarrow$ | AED $\\downarrow$ | FID $\\downarrow$ |\n | -------------------- | ---------------------------- | ------------------ | --------------- | --------------- | ---------------- | ---------------- | ---------------- |\n | FNeVR w/o $L_\\sigma$ | 0.0412 | 0.0869 | 24.163 | 0.7705 | 1.284 | 0.0239 | 9.024 |\n | FNeVR | 0.0404 | 0.0804 | 24.292 | 0.7773 | 1.254 | 0.0231 | 8.443 |\n \n The results show that if $L_\\sigma$ is not introduced, all metrics are worse than the full FNeVR, indicating that it is necessary to introduce reliable 3D information. Meanwhile, even without the 3D information, FNeVR still performs well, demonstrating the remarkable effectiveness of the proposed framework in this paper.\n\n### Limitation1: The video results of this paper are not impressive enough.\n\n**A5:**\n\n+ It is indeed our carelessness that we only provided a few comparison results of our model with FOMM and DaGAN in the supplementary materials. To be honest, we mistakenly think that providing too many videos will increase the burden of review. Hence, we now provide more video results, including the results of Face vid2vid(S) at this anonymous [**link**](https://1drv.ms/u/s!AraiW_uJqO8vhXqLwwjPf1jzC3GF?e=eQzgMw), and we emphasize the difference between our FNeVR and other methods by slowing down the video playback and zooming in some facial details to highlight it. It can be seen that the results of FOMM are relatively blurred, while the results of DaGAN have obvious defects in many frames. The results generated by our FNeVR are basically without these problems. Moreover, the results generated by Face vid2vid(S) have unnatural expressions, especially in the eyes regions, while our results are much more natural and of higher quality. \n\n+ In addition, we provide more results of pose editing at this [**link**](https://1drv.ms/u/s!AraiW_uJqO8vhXqLwwjPf1jzC3GF?e=eQzgMw). 
According to our experiments, LPE can generate reliable face rotations when pitch and yaw are in $[- {20^ \\circ }, + {20^ \\circ }]$ and roll is in $[- {25^ \\circ }, + {25^ \\circ }]$.\n It is worth noting that we realize pose editing in a more lightweight way, and enable 2D keypoints-based warping methods to achieve the pose editing function, such as FOMM, DaGAN, and SAFA.\n \n All in all, the experimental results show that our model generates significantly better results with more realistic face animations. And LPE is also an innovative design. We hope the reviewer can recognize our contributions.",
" ### Limitation2: Comparing with other reenact work.\n\n**A6:**\n\n+ As you suggested, we conduct more comparisons with PIRenderer and show the results in Table 1 and Table 2 of the revised paper. We use the official code and checkpoints for PIRenderer. The results are also given here:\n \n | Method | $\\mathcal{L}_1$ $\\downarrow$ | LPIPS $\\downarrow$ | PSNR $\\uparrow$ | SSIM $\\uparrow$ | AKD $\\downarrow$ | AED $\\downarrow$ | FID $\\downarrow$ |\n | ---------- | ---------------------------- | ------------------ | --------------- | --------------- | ---------------- | ---------------- | ---------------- |\n | PIRenderer | 0.0566 | 0.1863 | 21.040 | 0.6550 | 2.186 | 0.2245 | 11.88 |\n | FNeVR | 0.0404 | 0.0804 | 24.292 | 0.7773 | 1.254 | 0.0231 | 8.443 |\n \n | Method | FID Re1 $\\downarrow$ | CSIM Re1 $\\uparrow$ | FID Re2 $\\downarrow$ | CSIM Re2 $\\uparrow$ |\n | ---------- | -------------------- | ------------------- | -------------------- | ------------------- |\n | PIRenderer | 82.23 | 0.5535 | 79.3 | 0.5093 |\n | FNeVR | 98.23 | 0.5505 | 133.9 | 0.5282 |\n \n \n Compared with FNeVR, the matrics of PIRenderer for reconstruction are worse. At the same time, regarding the reenactment metrics, only the image generation quality by PIRenderer has certain advantages, which is the reason for the better FID. However, according to its generated videos, the faces are blurred, and the quality of the expression transfer is poor, especially the details of the eyes and mouth. Moreover, PIRenderer suffers from a severe identity preservation problem. We also provide several video results at this [**link**](https://1drv.ms/u/s!AraiW_uJqO8vhXqLwwjPf1jzC3GF?e=eQzgMw). The main reason for the poor generation quality is that PIRenderer decouples the facial information for generation, which is ineffective compared to the methods of directly rendering warped features. The main reason for the failure of expression transfer is that PIRenderer uses pre-trained 3DMM reconstruction parameters to estimate the motion field, which loses more detailed information, making it difficult to deal with expression changes.\n\n### Limitation3: No ethical issues are discussed. As a matter of fact, FOMM itself has certain issues.\n\n**A7:**\n\n+ In fact, we have stated Social Impact in the supplementary materials, which is allowed by this conference.\n\n### Limitation4: Certain descriptionsare confusing.\n\n+ **\"This is the first work of neural volume rendering for face animation.\"**\n \n **A8:**\n \n + As you suggested, we have revised the statement about \"first\" in the paper. However, it is worth noting that our FVR is novel and different from other methods. The only common point is that we all employ neural rendering. The basic idea of NerFACE and ADNeRF is to decouple the basic attribute information of the face, and input the basic attributes of the face into NeRF to complete the rendering. Instead of directly applying NeRF, our model deals with the single-view input to generate the driving result of a single view. We would like to emphasize that our FVR can be applied to more generation tasks as long as appropriate 3D information is provided. For example, if we introduce reliable 3D body information to FVR, FVR can be applied to generate photo-realistic images of the body.\n\n+ **\"Model-free methods\".**\n\n **A9:**\n \n + Thanks for your concern. As you have understood, \"Model-free methods\" refer to those that do not rely on any prior knowledge of faces. 
We follow the existing works (such as HeadGAN and StyleHeat) to name these \"Model-free methods\". \n \n In the revised paper, we have added this definition for clarity. Meanwhile, we have replaced \"Model-free models\" with \"Model-free methods\".\n\nThank you again for your comprehensive review. We hope that this response is convincing and could address all your questions and concerns. Please let us know if you still have any other concerns or questions.\n\nBest regards, \\\nAuthors",
" We sincerely thank the editor and all reviewers for the valuable suggestions and inspiring criticisms. We believe that all comments have been carefully accommodated to the best of our knowledge. Newly added or modified texts are highlighted in blue in the revised manuscript. According to the reviewers, we have emphatically revised the motivation of the proposed FNeVR and the details of orthogonal adaptive ray-sampling, and added more ablation studies about the effectiveness of the wrapped feature $F_w$ and $L_\\sigma$. Moreover, in order to present the impressive performance of our FNeVR, we have provided more video results at this anonymous [**link**](https://1drv.ms/u/s!AraiW_uJqO8vhXqLwwjPf1jzC3GF?e=eQzgMw). \n\nFurthermore, we would like to emphasize the main contributions of this work:\n+ We present a Face Neural Volume Rendering (FNeVR) network which takes the merits of 2D motion warping on face expression transformation and 3D volume rendering on high-quality image synthesis in a unified framework. FNeVR can not only generate more realistic images than 2D-based methods, but also obtain more accurate motion transfer than 3D-based methods.\n+ We propose a Face Volume Rendering (FVR) module with orthogonal adaptive ray-sampling to capture facial details effectively and improve the animation performance. Unlike NeRF and its related works, our FVR directly processes the 3D features and efficiently generates the driving result of a single view.\n\nWe greatly appreciate the editor and all reviewers for your time in reviewing our revised manuscript and response. \n\nBest regards, \\\nAuthors",
" We sincerely appreciate your careful and thoughtful comments. In the following we address your concerns point by point. \n\n#### Q1: It is better to conduct the ablation study to investigate the influence of two hyperparameters in formula (12).\n\n+ A1:As you suggested, we utilize CSIM as the evaluation metric, which computes the cosine similarity to assess the quality of identity preservation, to provide the ablation study for the two hyperparameters of $L_{editor}$ in the following table. It shows that setting $λ_1$ to 1 and $λ_2$ to 0.5 produces the best performance. Our LPE is a separate module which does not affect the performance of the animation model. Therefore, the parameters are set empirically with the purpose of balancing the loss functions in the training process and ensuring that $L_{editor}$ does not interfere with the optimization of other loss functions.\n \n Table 1. CSIM values with different $λ_1$ and $λ_2$ in $\\mathcal{L}_{editor}$.\n \n | Formulation | $λ_1$=0.5, $λ_2$=0.3 | $λ_1$=0.75, $λ_2$=0.4 | $λ_1$=1, $λ_2$=0.5 | $λ_1$=1.25, $λ_2$=0.6 | $λ_1$=1.5, $λ_2$=0.7 |\n | ----------- |:--------------------:|:---------------------:|:------------------:|:---------------------:| -------------------- |\n | CSIM | 0.8822 | 0.8909 | **0.9051** | 0.8973 | 0.8837 |\n\n#### Q2: From the presentations (Figure 4(b)) for the cross-identity reenactment experiment, the authors only study the rendering performance when the people in source images and driving videos belong to the same gender. Do other human attributes influence the performance of FNeVR, such as gender, age, haircut, and beard?\n\n+ A2: Thanks for your kind suggestion. We have added the comparison results of cross-gender reenactment experiments in the revised paper (the last row in Figure 4(b)). More video results including cross-gender and cross-age reenactment experiments are given at this anonymous [**link**](https://1drv.ms/u/s!AraiW_uJqO8vhXqLwwjPf1jzC3GF?e=eQzgMw), and we will show more images in the supplementary materials. All of the results provided indicate that our FNeVR can achieve state-of-the-art performance in various cases. The proposed FNeVR utilizes the 2D motion estimation module to complete the warping process. It does not involve attribute decoupling, but directly performs the warping operation based on the relative motion field. Consequently, our FNeVR takes the merits of the 2D warping module and is robust to the influence of various human attributes.\n\n#### Q3: In section 4.4, the pose editor is capable of facial pose editing with a slight Euler angle. Can LPE generate reliable results when a larger Euler angle is given? Of course, the given Euler angle is reasonable for human face rotations.\n\n+ A3: Please note that we already provided a demo video in the supplementary materials to show the editing results under different Euler angles, and we also put more edited videos at this [**link**](https://1drv.ms/u/s!AraiW_uJqO8vhXqLwwjPf1jzC3GF?e=eQzgMw). According to our experiments, LPE can generate reliable face rotations when pitch and yaw are in $[- {20^ \\circ }, + {20^ \\circ }]$ and roll is in $[- {25^ \\circ }, + {25^ \\circ }]$. However, as you concerned, since there are few samples of large angle changes in the training dataset, our lightweight LPE does not handle them well with a large rotation angle, resulting in a certain degree of distortion. This is a common challenge of many existing works such as Face vid2vid and we will try to solve this problem in the future. 
As stated in the paper, our LPE is based on a 2D keypoint warping framework which does not require any 3D face geometric prior during the inference process, while existing works usually achieve pose editing with the help of a 3D face prior, which increases the computational cost of the network. Our LPE can be conveniently inserted into existing 2D keypoint warping frameworks. To the best of our knowledge, no face animation works based on the 2D keypoint warping framework have the pose editing function (such as FOMM, SAFA and DaGAN), and our LPE is the first one. Moreover, compared to 3D-based works, our advantage lies in using a lighter model to generate sufficiently realistic results. \n \n Finally, we would like to emphasize that the main contributions of this work are the novelty and effectiveness of the FVR module, with LPE as a complement. As stated in A2 above, our method can produce excellent performance in animating still face images in various cases. We have revised our paper to emphasize the contributions.\n\nThank you again for your review. We hope that our response helps to address all your concerns. Please let us know if you still have any other concerns or questions.\n\nBest regards, \\\nAuthors",
" \nThank you for your positive evaluations, and we sincerely appreciate your constructive and thoughtful comments. We will address your concerns point by point below.\n### Q1: Speed evaluation.\n+ A1: Thanks for your kind suggestion. We have provided the comparison results of the inference speed in terms of FPS (Frames Per Second) in Table 3 in the revised manuscript. For the convenience of review, we also give the results here, which show that our FNeVR is faster than the state-of-the-art methods except FOMM, indicating that FNeVR not only effectively improves the generation performance, but also has an outstanding efficiency.\n | Method | FPS |\n | -------------- | ------ |\n | FOMM | 61.298 |\n | Face vid2vid | 17.790 |\n | Face vid2vid-S | 13.219 |\n | DaGAN | 26.753 |\n | FNeVR | 36.568 |\n\n### Q2: $N_\\sigma$.\n+ A2: $N_\\sigma$ is equal to 16, and we have added an explanation of $N_\\sigma$ to the supplementary materials. Specifically, $N_\\sigma$ is the number of channels representing the 3D shape information of the 3D shape feature $F_\\sigma$, where $F_\\sigma$ is extracted to predict the voxel probability $p_\\sigma$. As stated in Equation (8) in the paper, the fourth dimension (channel number) of $p_\\sigma$ is equal to 1. \n### Q3: Clarify the contribution and do a comparison against single-stage uniform sampling or geometric-prior guided sampling.\n+ A3: Thanks for this insightful comment. We further describe this part in Section 1 and Section 3.3 of the revised paper and highlight the revised part in blue. It should be clarified that our method is quite different from the existing single-stage uniform sampling methods and geometric-prior guided sampling methods in terms of implementation.\\\n According to our knowledge, models which introduce the sampling similar to geometric-prior guided sampling include headNeRF, NeRFace, etc. The input data and model designing of these methods are completely different from our FNeVR. They sample the facial attribute information, including geometirc attributes, while our FNeVR directly samples the 3D related features. In contrast, face features are easier to deal with than face attributes, so our model uses a single MLP network to realize the target of face animation, and we also introduce the adaptive step for single-stage uniform sampling. In addition, these methods directly introduce NeRF into specific tasks, and our method uses the theory of volume rendering to design a novel decoder. It is worth mentioning that as long as enough reliable 3D information is provided to our FVR, it can also be used as a decoder for other generation tasks, such as 3D human body generation. Therefore, our method is novel and more general in comparison.\n### Limitation1: Major contribution of the paper FVR.\n+ A4: As stated in A3, we have made the motivation more explicit in Section 1 of the revised paper and highlighted the revised part in blue. The human face is a 3D object, so it is necessary to introduce 3D information in generation. Existing methods often pay little attention to the role of 3D rendering. In this paper, we design a FVR based on the volume rendering formula, which effectively improves the generation results and consumes less computational cost than Face vid2vid that uses 3D warping. It is worth noting that our FVR can be applied to more generation tasks as long as appropriate 3D shape and color information is provided. 
For example, if we introduce reliable 3D body information to FVR, FVR can be applied to generate photo-realistic images of the body.\n \n The reason why we choose the volume rendering rather than rasterization of neural texture is that rasterization of neural texture still renders features at the 2D level, while our FVR transforms 2D features into 3D features, and combines the volume rendering formula to simulate the light propagation process. Besides, compared with rasterization of neural texture, FVR also considers volume density information, which can intuitively integrate spatial prior information, so as to obtain more realistic rendering results.\n### Limitation2: Failure examples.\n+ A5: As you suggested, we have provided some failure editing examples at this anonymous [**link**](https://1drv.ms/u/s!AraiW_uJqO8vhXqLwwjPf1jzC3GF?e=eQzgMw). It indicates that our LPE can generate reliable face rotations when pitch and yaw are in $[- {20^ \\circ }, + {20^ \\circ }]$ and roll is in $[- {25^ \\circ }, + {25^ \\circ }]$. However, since there are few samples of large angle changes in the training dataset, our lightweight LPE does not handle them well with a large rotation angle. This is a common challenge of many existing works such as Face vid2vid and we will try to solve this problem in the future.\n\nThank you again for your positive comments. We hope that our rebuttal could address all your questions and concerns. Please let us know if you still have any other concerns or questions. \n\nBest regards, \\\nAuthors\n",
" This paper proposed a unified framework FNeVR for face animation, which combines 2D motion warping and 3D volume rendering. Specifically, a 3D Face Volume Rendering module is designed for facial details rendering. Moreover, a pose editor is also incorporated to change the facial pose. The experimental results on popular benchmarks verify the superiority of FNeVR. The manuscript is well written and clearly clarifies the objective and main idea in the research field. The proposed framework first applies to neural volume rendering for face animation and achieves better performance than SOTA methods. (1) It is better to conduct the ablation study to investigate the influence of two hyperparameters in formula (13). \n\n(2) From the presentations (Figure 4(b)) for the cross-identity reenactment experiment, the authors only study the rendering performance when the people in source images and driving videos belong to the same gender. Do other human attributes influence the performance of FNeVR, such as gender, age, haircut, and beard? \n\n(3) In section 4.4, the pose editor is capable of facial pose editing with a slight Euler angle. Can LPE generate reliable results when a larger Euler angle is given? Of course, the given Euler angle is reasonable for human face rotations. \n In the cross-identity reenactment experiment, more experiments could be included to fully demonstrate the validity of the proposed framework, including cross-gender and cross-age reenactment experiments.",
" This paper proposes to combine First-order motion, face vid2vid, and face volume rendering into one framework. The prior 3D knowledge from 3DMM is used to help the volume rendering procedure. The SPADE decoder from the open-sourced Face vid2vid is also used for the final prediction of the method. Experiments show that the method improves FOMM and Face vid2vid quantitatively. ++ The idea of involving a simple orthogonal volume rendering for face generation is actually interesting and reasonable.\n\n++ The way of formulating the volume rendering is nice.\n\n++ The 3D prior from 3DMM is nicely involved without interfering with the inference procedure.\n\n++ The quantitative results outperform many previous methods.\n 1. Why exactly is this paper's Orthogonal Adaptive Ray-Sampling better than the hierarchical (two-stage) volume sampling. What is the computational burden difference between the Orthogonal Adaptive Ray-Sampling and NeRF's rendering? And where is the adaptive part?\n\n2. The authors concatenate the volume rendering feature map $F_r$ and the warped feature $F_w$ for the final prediction. What is the performance if the wrapped feature is not sent into the SPADE decoder? Is $F_w$ involved in FOMM+FVR? Is $I_m$ the results of FOMM+FVR?\n\n3. The reviewer assumes that the Flops and memory consumption difference is led by the 3D landmark encoding procedure of Face vid2vid. Thus I am curious about the memory consumption of the models in the ablation studies (FOMM, FOMM + SPADE, FOMM+FVR). According to the paper, FNeVR would have a larger model size than these methods. Particularly, FOMM + SPADE is a combination of open-sourced code, which is the true baseline of the proposed method.\n\nActually, the reviewer encourages the authors to replace the SPADE decoder with a more lightweight one and adopts a variant of FOMM+FVR as the final result of this paper. As can be seen in Table 4, no obvious gain can be achieved by involving the SPADE decoder. Adding it involves more computation and makes the method less elegant.\n\n4. The ablation study without the loss function (4) is not given. To my understanding, this is very crucial in integrating 3D information.\n I have mixed feelings about this paper. Involving a simple volume rendering into FOMM like reenactment pipeline is intuitive and nice. The whole formulation, including involving 3DMM as weak prior to density estimation is not entirely novel but reasonable. Overall, I think the novelty of this paper is sufficient.\n\n-- However, the video results of this paper are not impressive enough. Only a few videos are involved, which are not sufficient for reviewers to evaluate. No obvious improvements can be seen given the video results. Even re-training FOMM again might lead to the difference. The rotation results are also of poor quality. More importantly, no video comparisons with face vid2vid (S) are given. This is the most important weakness of this paper. \n\nFor now, I am a bit leaning towards rejection, but I will have a look at the rebuttal for the final decision. Why are the results of Face vid2vid not given? The authors should prepare a nicer video for future use of all kinds.\n\n-- The lack of certain analysis and ablations as stated in the Questions part. Moreover, this paper should also consider comparing with other reenact works such as MARR and PIRenderer. \n\n-- No ethical issues are discussed. As a matter of fact, FOMM itself has certain issues.\n\n-- Certain descriptions in this paper are confusing. 
For example:\n\na) In the abstract the authors write \"this is the first work to formulate the neural volume rendering of face animation with a new architecture design\". To the best of the reviewer's knowledge, every work that involves neural volume rendering into face animation would somehow propose a new architecture. It is not sure what the authors are claiming.\n\nIn the introduction, the authors claim \"this is the first work of neural volume rendering for face animation\", which is inappropriate. NeRF has been applied to face animation in methods such as NerFACE and ADNeRF.\n\nb) The authors use \"Model-free methods\" to indicate methods such as First-order Motion Model. The \"Model\" here might lead to misunderstandings as the authors even write \"Model-free models\". It is actually \"human-designed intermediate structural representation\" free. \n\n-- Certain papers mentioned in the review may not be cited. Please check the references.",
" This paper proposes a Face Neural Volume Rendering network for identity-preserving and photo-realistic face animation, which unifies the 2D motion warping and 3D volume rendering. Specifically, on top of 2d motion warping, it leverages the 3d face reconstruction for shape information and proposes a face volume rendering module to enhance facial details. It shows good quality and performance on several talking-head benchmarks. - It well integrates 2d motion warping field, 3d face reconstruction and 3d volume rendering into a unified framework and shows the effect of each module with an ablation study. However, I think it would be good if the paper could provide more insights into the motivation to the design of each module. For example, the volume rendering is still typically expensive and why here it opts for volume rendering vs rasterization of neural texture (similar to Deferred Neural Rendering). \n\n- Extensive experiments and evaluations. Results are quantitatively and qualitatively good. It also provides analysis why the method is inferior to SOTA on some metrics (CSIM)\n\n- The paper is well presented. \n\n - It will be good to provide speed evaluation at inference, compared to other SOTA methods. \n\n- Is N_\\sigma equal to 1? It is a bit confusing. \n\n- There are multiple papers that use single-pass uniform sampling when volume rendering feature maps or adaptively sample based on the geometric prior (3d face reconstruction in this case). it does not seem to be a novelty as claimed by the paper. Please clarify the contribution and do a comparison against single-stage uniform sampling or geometric-prior guided sampling. \n\n - As stated above, for the major contribution of the paper FVR, I do not see a strong design motivation to it given its computational cost. \n\n- It will be good to show failure examples as well. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"A7eID1fsZrm",
"dYgWHvNcc2f",
"7YWNkfM5yx",
"7YWNkfM5yx",
"7YWNkfM5yx",
"nips_2022_7HTEHRMlxYH",
"JxgB5ywhRGi",
"1LVLhWb5sy2",
"nips_2022_7HTEHRMlxYH",
"nips_2022_7HTEHRMlxYH",
"nips_2022_7HTEHRMlxYH"
] |
nips_2022_UPnJuDKqOfX | HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details | Neural rendering can be used to reconstruct implicit representations of shapes without 3D supervision. However, current neural surface reconstruction methods have difficulty learning high-frequency geometry details, so the reconstructed shapes are often over-smoothed. We develop HF-NeuS, a novel method to improve the quality of surface reconstruction in neural rendering. We follow recent work to model surfaces as signed distance functions (SDFs). First, we offer a derivation to analyze the relationship between the SDF, the volume density, the transparency function, and the weighting function used in the volume rendering equation and propose to model transparency as a transformed SDF. Second, we observe that attempting to jointly encode high-frequency and low-frequency components in a single SDF leads to unstable optimization. We propose to decompose the SDF into base and displacement functions with a coarse-to-fine strategy to increase the high-frequency details gradually. Finally, we design an adaptive optimization strategy that makes the training process focus on improving those regions near the surface where the SDFs have artifacts. Our qualitative and quantitative results show that our method can reconstruct fine-grained surface details and obtain better surface reconstruction quality than the current state of the art. Code available at https://github.com/yiqun-wang/HFS. | Accept | This paper presents a new way to build centered weights for volume rendering, utilize displacement maps, adaptive scale, as well as other techniques to provide better high frequency details in neural SDF representations.
The reviewers also acknowledged the rebuttal and the revision, and that the authors addressed their main concerns. | train | [
"7WY8w2NiW9H",
"Lk3rOcxdX3",
"3IAFHvvsZ8u",
"4G1_xUDrGYR",
"3bkQ2VsVdw",
"HX-AZ2xaYZ2",
"DvcW6UsNYp4",
"GiMjjGfQ6BD",
"ZqNqB67GLa",
"ZHbzhrIb6EgE",
"fGlxP_9PUj1Q",
"KTpqZ1wPyPc",
"63nZlEv0Lou",
"E_g0DJY0ys",
"KWpgoXFWOFn",
"T38rDcJnxG9",
"RyCn8cKqI6c",
"EyKHQu24ZXs",
"m1R7Bb9W26c"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. My questions have been sufficiently addressed and some of my feedback has been integrated into the revised paper.\n\nI have also read the other reviews and the authors' response to those papers, and have no further questions.\n\nI stand with my original rating and recommend accepting the paper.",
" Dear reviewer, we are very thankful for your suggestions and help in improving several parts of the paper. And if that's possible, it would be crucial for us to know if there are any concerns left on your side after our response. We would be glad to know whether there is anything else we could elaborate on to address existing or any new concerns.",
" Thank you for your response. It would be crucial for us to know if there are other concerns left on your side. We will incorporate all the changes in the final paper.",
" Thank you for your feedback and suggestions. We will incorporate all the explanations and discussions in the final version of the main paper or supplemental material.",
" Thanks for providing detailed responses to my questions!\n\nI appreciate the clarification of the differences between your approach and the IDF paper, the motivation of using positional encoding instead of two SIREN networks, and also give acknowledgement to IDF in the method part. Also, I appreciate your explanation on the ablation study and would be great if you incorporate them in the final paper (main paper or supp. mat.).\n\nFrom the rebuttal to other reviewers' and my comments, I would upgrade my scores accordingly to accept.",
" Thanks for providing detailed comments on the different concerns.\n\nIn particular, I appreciate the adequate discussion of ...\n- ... the detailed discussion of the works of VolSDF [30], NeuS [28] and Yifan et al.'s approach [32] which helps to better see the embedding of the proposed approach in the context of related work,\n- ... the improvements regarding the exposition (typos, also mentioning MipNeRF in the related work, mentioning the used Chamfer distance formula, etc.),\n- ... the promised code release,\n- ... the additional details regarding training/inference times,\n- ... the shift of limitations into the main paper,\n- ... the added visualization of local errors.",
" My questions have been sufficiently addressed in the author's response and some suggestions have been already integrated in the revision.\n\nI stand with original my rating and recommend accepting the paper.",
" We thank the reviewer for the insightful and detailed review. We respond to each question in the following.\n\n**R5-Q1. It's not very clear what exactly is novel (proposed by the paper) and what is from prior work. More effort should be put into explaining and motivating the differences.**\n\nPlease refer to ALL-Q1 and ALL-Q2.\n\n**R5-Q2. I don't see any result demonstrating the effectiveness of this hierarchical sampling strategy. Table 3: Is X+C2F adding C2F to X+H? FULL is adding adaptive transparency to IDF+H+C2F? If this is not the case, then the IDF+adaptive s is missing.**\n\nAs you discussed, X+C2F is adding C2F to X+H and FULL is adding adaptive transparency to IDF+H+C2F. We model an adaptive transparency function which is implemented by modifying the scale $s$ adaptively. And the scale $s$ is used in the volume rendering integral and hierarchical sampling. We present the qualitative results in Table 4, and the visual comparison in Fig. 11 in the additional material as also requested by other reviewers. It can be seen that this strategy can improve surface reconstruction quality and repair undesired parts.\n\n**R5-Q3. Explanation of the displacement constraint.**\n\nThanks for the explanation you presented. We found that the previous explanation did not take into account the change in $s$\nBecause $\\Psi'_s =s\\Psi_s(1-\\Psi_s)$, the $\\Psi'_s$ is small over all the space with small $s$ at the beginning, which can be interpreted as having a tight constraint at the beginning and then releasing the constraint near the surface. We changed the explanation in the supplemental material in the revision.\n\n**R5-Q4. Can I understand the adaptive transparency function described in (15) as a form of hard example mining? What do you think $\\left\\| {\\nabla {f}} \\right\\|$ would be a good indicator?**\n\nYes, the adaptive transparency function is like a form of hard example mining, which focuses on low-quality regions of the SDF in the spatial domain.\nFor the signed distance field, the norm of the gradient is expected to be 1. If the norm is greater than 1, it means that the signed distance at that location changes more drastically, which means that the change of the signed distance function is larger in a shorter distance.\nWe increase the number of sampling points in these regions to better improve the surface quality. \n\n**R5-Q5. Should it be $\\left|| {\\nabla {f}} \\right||$ instead of $\\left| {\\nabla {f}} \\right|$?**\n\nThank you for the suggestion, it is $\\left|| {\\nabla {f}} \\right||$ instead of $\\left| {\\nabla {f}} \\right|$ in the equation, we will fix the typo in the revision.",
" \n**R4-Q6. Could the authors explain their choice of the logistic sigmoid function for Psi? such as the Gaussian or Laplace distributions CDF.**\n\nThe CDF of the Gaussian distribution is not an explicit elementary function. This approach requires integration in a discrete approximation that introduces additional error.\nUsing the sigmoid function allows for more efficient derivation of density formulas and reduces numerical problems caused by division. We add a comparison in the revision. In Fig. 8, \"OUR Base-Laplace\" means using CDF of Laplace distributions for $\\Psi$. In the negative semi-axis, the derivative of the Laplace distribution CDF divided by the distribution is a constant function, the scale parameter $s$, and this will affect the quality of surface reconstruction. \n\n**R4-Q7. I suspect equation 9 has a typo, and it should be $f_d(x_b)$?**\n\nWe previously assumed that the displacement function between the basis function point and the combined surface is the same. But the problem is two displacement functions are not the same. In order to explain this more clearly, we define the displacement implicit function $f_{d'}$ to map the point $x_b$ on the base surface to the surface point $x$ along the normal $n_b$ and $f_{d}$ is used to map the point $x$ on the base surface to the surface point $x_b$ along the normal $n_b$, thus $f_{d'}(x_b) = f_d(x)$. We fixed this in the revision.\n\n**R4-Q8. The intention behind equation 15 is unclear. Could the authors please explain?**\n\nFor the signed distance field, the norm of the gradient is expected to be 1. \nIf the norm is greater than 1, it means that the signed distance at that location changes more drastically, which means that the change of signed distance function is larger in a shorter distance.\nWe propose to use the norm of the gradient of the signed distance field to weight the parameter $s$ in these short distances. We increase $s$ when the norm of the gradient along the ray direction is larger than 1 to further consider these regions to improve. Because we only need to guarantee the accurate SDF at a 0 level set to extract the surface, we need to find the gradient error near the surface, so we use normalized $\\Psi'_s$ as a weighting function. \n\n**R4-Q9. I encourage the authors to move the limitations section to the main paper in the revised version.**\n\nThanks for your suggestion, we will move the limitation section to the main paper.",
" We thank the reviewer for the insightful and detailed review. We respond to each question in the following.\n\n**R3-Q1. What are the advantages of modeling SDFs with transparency?**\n\nPlease refer to ALL-Q1.\n\n**R3-Q2. Should IDF speed up the optimization process? Any experiments show the runtime benefit of IDF?**\n\nAlthough training an IDF is easier than training an SDF, the introduction of an additional MLP results in a longer training time for a single iteration. So overall, it does not improve the runtime efficiency of the optimization process, but it can get more detailed reconstruction results. The training time of each scene is around 20 hours for 300k iterations. We will add this to the experiment section in the revision.\n\n**R3-Q3. According to both quantitative and qualitative results, the proposed approach is only marginally better.**\n\nSince the error of the high-frequency details is small, the overall Chamfer distance is not greatly improved. However, this is relative. The methods for 3D reconstruction are well refined and generally have very good results. Compared to the improvements that are typical in the field, our improvements are significant.\n\nIn addition, on the visual side, we provide a heatmap result of the local errors in Fig.12 of the revised supplemental material to show the significant improvement visually. It can be seen that we have a higher improvement in the details, such as the roof and the details in the shovel of the excavator. Furthermore, compared with the IDR[31], VolSDF and NeuS have only an improvement of less than 0.1 on the DTU dataset in their paper. Our improvement is also 0.1. \n\n**R3-Q4. How much benefit can you obtain (without the displacement field) for NeuS?**\n\nPlease refer to common comment ALL-Q1. Note that we also provide a comparison with NeuS and VolSDF in Fig. 8 of the supplementary material. Our \"OUR Base-Sigmoid\" result is our method without the displacement field, which is better than the competitors. \n\n**R3-Q5. Not enough acknowledgment of the IDF paper in Section 3.2. For example, $x_b=x-f_d(x_b)n_b$ the key idea of implicit displacement field mentioned already in IDF.**\n\nPlease refer to common comment ALL-Q2. Note that we have cited [32] and discussed it in Section 3.2 now.\n\n**R3-Q6. Why not do the same as in the IDF paper where they use two SIRENs for the base model and displacement model but using different frequencies?**\n\nPlease also refer to common comment ALL-Q2. Note that we provide an experiment to compare our method and the method used in [32] in Fig. 8 of the revised supplementary material to show the benefits of increasing frequency gradually, where \"OURS-Siren\" is the method used in [32].\n\n**R3-Q7. Do you calculate the PSNR on the training views or the held-out testing views?**\n\nAll PSNR results are on the training views.\n\n**R3-Q8. Why Base+H is much worse than only Base? Why changing from only base to IDF does not guarantee better results (DTU experiment)? And it seems to me that the most beneficial components are coming from the coarse-to-fine strategy and the locally-adaptive scales.**\n\nThe improvement of the geometric fidelity using high-frequency details is challenging, not only because the high-frequency error is small, but also because the high-frequency noise will affect the further improvement of the fidelity. We provide a visualization of the ablation as other reviewer requested in Fig. 11 of the revised supplementary material. 
We show that directly using Base+H will destroy the reconstruction of geometry, even though the image quality is great. The IDF+H can alleviate some of the high-frequency noise, but the result has a large geometric error in some regions. We propose the method of gradually adding frequency to the learning process, which can reconstruct a fine surface while suppressing noise.\n\n**R3-Q9. Limitations and societal impact are briefly discussed in the conclusion.**\n\nPlease refer to the common comment ALL-Q3. ",
" We thank the reviewer for the insightful and detailed review. We respond to each question in the following.\n\n**R4-Q1. No rendering results showing ablation study and comparisons and discussing each component.**\n\nAs requested, we show the qualitative results of each component in Fig. 11 of the revised supplementary material. We observed that learning high-frequency details (Base+H) using only image information is difficult, and the network may overfit the image without reconstructing the correct geometry. \nLearning high-frequency details using IDF (IDF+H) will alleviate the noise but still produce large geometric errors. We provide a solution that allows more fine-grained control over frequency. By using a coarse-to-fine positional encoding, the frequency can be explicitly controlled by a coarse-to-fine strategy. This approach is more stable for the case without 3D supervision. Compared with only using the coarse-to-fine strategy (Base+C2F), IDF using C2F of positional encoding (IDF+C2F) has the ability to further improve the geometric fidelity with the PSNR increasing. Using an adaptive strategy(FULL) can further improve geometric fidelity while improving PSNR. \nWe also provide the qualitative results (synthetic image and surface) in Fig. 8 of the supplementary material and show better surface reconstruction results compared with other methods.\n\n**R4-Q2. Because this decomposition is not unique, and since high frequencies encoding adds noise, more possible wrong reconstructions exist that overfit the rendering. I believe that a visualization of the decomposition would emphasize its benefits.**\n\nWe provide a visualization in Fig. 10 in the revised supplementary material as requested. As can be seen from the figure, the base SDF can reconstruct a smooth model of the Buddha. The displacement function is used to add extra details like cracks in the Buddha model and some small holes in the forehead.\n\n**R4-Q3. The difference between the presented VolSDF baseline results and other resources like RefNeRF.**\n\nWe actually adopt the official VolSDF code. Our quantitative results are consistent with previous work, e.g. the VolSDF paper. However, the method is heuristic and rerunning the method many times and manually selecting the best version will give better results. For example, we noticed that by running the VolSDF multiple times, one can achieve better results for some of the models, specifically for the DTU house and ficus model. However, this gives an advantage to VolSDF as this process cannot be done automatically. We nevertheless updated the paper with the improved versions of these two models.\nFor the RefNeRF, they do not provide results of reconstructed surfaces but normal maps. The normal map is also weighted by volume rendering and does not fully represent the level set. In our experiments, the normal map looks correct and does not guarantee the correct reconstruction of the surface by marching cubes, especially when the normal map has noise in the details.\n\n**R4-Q4. Discretization formulas and multiple surface intersections.**\n\nFor the discretization, one can bring Eq.5 and Eq.6 into Eq.7, and take advantage of the properties of the derivative of the logistic sigmoid function $\\Psi'_s =s\\Psi_s(1-\\Psi_s)$. 
We can get the $\\sigma$ formula for the discretization computation:\n\\begin{equation}\n\\sigma ({\\bf{r}}(t_i)) = s\\left(\\Psi\\left( {f\\left( {{\\bf{r}}(t_i)} \\right)} \\right) -1 \\right)\\nabla f\\left( {{\\bf{r}}(t_i)} \\right) \\cdot \\bf{d}\n\\end{equation}\n\nThen the volume rendering integral can be approximated using $\\alpha$-composition, where $\\alpha_i = 1 - exp \\left(-{\\sigma_i} \\left({t_{i+1}} - {t_i}\\right)\\right)$. For multiple surface intersections, we follow the same strategy as NeuS, where $\\alpha_i = clamp\\left( {\\alpha_i,0,1} \\right)$. \n\n**R4-Q5. It is not certain how the displacement constraint is imposed, nor its relation to equation 13**\n\nWe found that equation 13 of the original paper has a typo. We revised it with the following form. The displacement constraint is simply used to multiply the implicit displacement.\n\\begin{equation}\nf({\\bf{x}}) = {MLP_b}({\\gamma }({\\bf{x,\\alpha}}_b) - 4\\Psi{'_s}(f_b) {MLP_d}\\left( {{\\gamma}({\\bf{x,\\alpha}}_d)} \\right){\\bf{n}}),\n\\end{equation}\n",
" We thank the reviewer for the insightful and detailed review. We respond to each question in the following.\n\n**R2-Q1. What is the Novelty for IDF compared with [15, 32] and SDF calculation compared with [28] and [30]?**\n\nPlease refer to the answers in ALL-Q1 and ALL-Q2.\n\n**R2-Q2. Typos.** Thanks for pointing out the typos. We have fixed them in the revision.\n\n**R2-Q3. Some selections of reference are arbitrary and some reference is missing or arxiv version.**\n\nThanks for the suggestion. Our related work focuses on the recent neural implicit surface reconstruction. We will add the related voxel-based 3D reconstruction references as suggestions and improve the writing to extend the NeRF part to the revision. We also have used the published format instead of arxiv version in the revision if available.\n\n**R2-Q4. Why has MipNeRF only been mentioned in the list in the introduction, but not in the discussion of related work or among the competing techniques used for evaluation?**\n\nThanks for the suggestion, we will discuss MipNeRF in the detail reconstruction section and expand the recent work of NeRF in related work. Both MipNeRF and NeRF focus on density reconstruction and cannot guarantee to produce watertight surfaces like SDF. We could try to evaluate the MipNeRF, but it is generally assumed that pure image-based NeRFs cannot compete with NeRFs that have a specific surface regularizer when it comes to surface reconstruction. We would assume that MipNeRF is better in terms of PSNR, but worse in terms of surface construction. The main issue is that extracting surfaces from MipNeRF needs extra code in Jax and we currently use PyTorch.\n\n**R2-Q5. Time of training and inference.**\n\nThe training time of each scene is around 20 hours for 300k iterations. The inference time for extracting a mesh surface with high resolution (512 grid resolution for marching cubes) is around 60 seconds and rendering an image at the resolution of 1600x1200 is around 540 seconds. We will add this to the experiment section in the revision. This setting of resolution is similar to other approaches for surface reconstruction (i.e. NeuS).\n\n**R2-Q6. An ablation study of the effect of the regularizers would be interesting as well.**\n\nWe provide an ablation study of Eikonal regularization in Fig.8 in the revision of the supplemental material. We observe that training without the regularization of the base SDF (\"OUR-w/o Base Reg\") results in slightly worse reconstruction quality. Thus constraining the base SDF can help improve the quality of the reconstruction.\n\n**R2-Q7. The stability of the reconstructions of fine-grained structural details could be shown with respect to the number of views.**\n\nThanks for the suggestion. We conduct an experiment for surface reconstruction with 10\\% of the training image in Fig.9. We find that our method can keep the structure of reconstructed objects complete compared to NeuS, and can better reconstruct parts such as thin stripes with fewer training images. PSNR of NeuS and OURS is 28.31 and 31.77 respectively, which is also improved.\n\n**R2-Q8. Limitations should be discussed in the main paper and the visualizations should be provided for bad cases.**\n\nThanks for the suggestion. Please also refer to ALL-Q3. We extend the limitation and visualize a bad case of Table 1 where the error is larger than that of the other methods. 
In this case, the lighting of this model varies and the texture is not as pronounced, thus it is difficult to reconstruct the details of the belly.\n\n**R2-Q9. Will code be released upon paper acceptance?**\n\nWe will release the code upon paper acceptance.\n\n**R2-Q10. The equation of the Chamfer distance.**\n\nWe added the formula of Chamfer distance in the supplemental material B.2 in the revision. \n\n**R2-Q11. Showing the local errors on the surface with respect to the ground truth to highlight where deviations are larger.**\n\nThanks for your suggestion. We added a visualization for local errors in Fig.12 of the revised supplemental material to better highlight the local errors. It can be seen that we have a higher improvement in the details, such as the roof and the details in the shovel of the excavator. ",
" We thank the reviewer for the insightful and detailed review. We respond to each question in the following.\n\n**R1-Q1. Why directly modeling the transparency function would be better?** \n\nCompared to VolSDF, since the transparency function is explicit, our method can use an inverse distribution sampling computed with the inverse CDF to satisfy the approximation quality. Thus no complex sampling scheme as in VolSDF is required. Compared with NeuS, we obtain a simpler formula for the density $\\sigma$ for discretization computation, reducing the numerical problems caused by division using in the NeuS. Our approach does not need to involve two different sampling points, namely section points and mid-points, where the color and the geometry are more consistent. Please refer to ALL-Q1 that we discuss this with theoretical explanation in the common comment ALL-Q1.\n\n\n**R1-Q2. No ablations showing the difference of performance compared with VolSDF/NeuS.**\n\nWe provide the experiment as requested in Fig. 8 \"OUR Base-Sigmoid\". Due to the easy approximation, no numerical problems due to division and no need to sample section points and mid-points separately, the result shows better geometry consistency and quality.\n\n**R1-Q3. Why does the use of the work[32] is not cited anywhere except in related work.**\n\nThanks for the suggestion of citation. We also discussed this in the common comments. Please see the answer in ALL-Q2.",
" We thank all the reviewers for the thorough and constructive reviews. In the following we first address the common concerns.\n\n**ALL-Q1. Can you provide more details about the advantages of the proposed modeling of the transparency function compared with VolSDf and NeuS?**\n\nCompared to VolSDF, since the transparency function is explicit, our method can use an inverse distribution sampling computed with the inverse CDF to satisfy the approximation quality. Thus no complex sampling scheme as in VolSDF is required.\n\n\nCompared with NeuS, we obtain a simpler formula for the density $\\sigma$ for the discretization computation, reducing the numerical problems caused by division in NeuS. Bringing Eq.5 and Eq.6 into Eq.7, we get the $\\sigma$ formula for the discretization. \n\\begin{equation}\n\\sigma ({\\bf{r}}(t_i)) = s\\left(\\Psi\\left( {f\\left( {{\\bf{r}}(t_i)} \\right)} \\right) -1 \\right)\\nabla f\\left( {{\\bf{r}}(t_i)} \\right) \\cdot \\bf{d}\n\\end{equation}\nThen the volume rendering integral can be approximated using $\\alpha$-composition, where $\\alpha_i = 1 - exp \\left(-{\\sigma_i} \\left({t_{i+1}} - {t_i}\\right)\\right)$.\nFurthermore, our approach does not need to use two different sampling points, namely section points and mid-points used in NeuS, which makes it easier to satisfy the unbiased weighting function. Since there is no need to calculate the SDF and the color separately for the two different point sets, the color and the geometry are more consistent compared to NeuS. We will add this discussion to the main text in the revision. In Fig. 8 of the supplementary material, we also provide a qualitative comparison to Volsdf and NeuS requested by the reviewers. For example, our \"Base-Sigmoid\" result shows better geometry consistency on the roof of the house compared with VolSDF and NeuS.\n\n**ALL-Q2. What exactly is novel compared with [32]? Why is [32] only discussed in the related work?**\n\nThanks for the suggestion of citing [32] in Section 3.2. We have cited [32] and discussed in Section 3.2 now.\n\nWe would like to note some differences to [32]. We use positional encoding instead of Siren, so that the frequency can be explicitly controlled by a coarse-to-fine strategy better than simply using two Siren networks with two different frequency levels. This is very useful when 3D supervision is not given. We provide results in Fig.8 of the supplementary material to answer the difference raised by the reviewers. \nWe observed that the IDF using Sirens (\"OURS-Siren\" in Fig.8) used in [32] can obtain a high PSNR result but low geometry fidelity. Although [32] also use a coarse-to-fine strategy between two frequency levels, we found that the method still has the problems when learning high-frequency details because of the high-frequency noise involved at the beginning. Our IDF using positional encoding does not use high-frequency information at the beginning of training, which makes the training more stable. In general, we provide a solution that allows more fine-grained control over frequency. This approach is more stable for the case without 3D supervision. \n\n\n**ALL-Q3. There is no detailed discussion of limitation in the main paper rather than in the supplementary material.**\n\nThanks for the suggestion. We will move some of the limitation part to the main text of the paper.",
" The paper tackles the task of surface reconstruction from images. They use the volume rendering based approach popularized by NeRF, and thus are also interested in maintaining high novel view synthesis performance. In particular they follow recent work (IDF, VolSDF) that embed an SDF into the volume rendering pipeline in order to get more accurate surfaces, and propose an approach for achieving high frequency detail. To do this, they incorporate the SDF in a new fashion, use implicit displacement fields [32] and use a coarse to fine strategy. Strengths\n- Good explanation of previous ways that SDFs have been incorporated into volume rendering\n- Good explanation for the proposed way to incorporate SDFs and the intuition behind it.\n- Experimented on well established datasets/benchmarks and shows significant improvements on surface reconstruction while maintaining similar or better view synthesis quality\n- Good ablation for the high frequency detail methods, e.g. basic encoding vs displacement field method, and the benefit of the coarse to fine approach.\n\nWeaknesses\n- No theoretical explanation of the benefits of the new SDF incorporation method compared to previous methods, except for simplicity of the derivation. You have done an excellent job motivating your approach mathematically, but haven't clearly explained what benefits that has over the other methods. You should explain in more detail why directly modelling the transparency function would be better.\n- No ablations showing how the new way to incorporate SDFs differs in performance to the way VolSDF/NeuS does it. In section 3.2 the work describing the decomposition into a base and displacement field is not cited at all (should be [32]). Given that it is a major part of the approach and the use of that work is not cited anywhere except in related work, this is quite important to do. Both limitations and potential negative societal impacts have been discussed in the conclusion.",
" The authors address the objective of improving the accuracy of 3D scene reconstruction based on neural rendering approaches. For this purpose, they first provide a theoretical discussion on how the signed distance function (SDF) can be embedded into volume rendering as well as the relationship between the SDF, the transparency function, the volume density and the weighting function. In addition, criteria for functions that are suitable to map signed distances to transparency and a respectively suitable class of functions is discussed, but the choices follow previous work (in particular, the integration of the SDF follows VolSDF, and the selection of a logistic sigmoid function with learnable scale parameter for Psi follows NeuS). This is used in a multi-scale fitting framework where the SDF is modeled as a combination of a base distance function and a (implicit) displacement function (similar to references [reference 15] and [reference 32]) along the normal of the base distance function, and positional encoding is used for each of these functions separately, where the frequency of the positional encoding is increased according to the coarse-to-fine strategy by Park et al. [reference 22]. Both functions are additionally represented by separate MLPs, and the used loss includes a radiance term and an additional Eikonol regularization of the SDF, where the norm of the gradient of the SDF is used to weight the scale parameter in a spatially varying manner.\n\nExperiments on the DTU Dataset, NeRF Synthetic Dataset and BlendedMVS dataset with comparisons to NeuS and VolSDF show the potential of the proposed approach. ## Originality/novelty and technical soundness:\npro:\n+ The approach seems reasonable.\n+ The major novelty seems combining these concepts in the NeRF framework and using a spatially varying scale parameter which, together with the combination of all the concepts, leads to accuracy improvements over the other approaches (NeuS, VolSDF) in the comparison. \n\ncon:\n- However, there is a larger overlap with existing work regarding the use of separate networks to represent a base SDF and displacement fields [references 15 and 32], the integration of the SDF into volume rendering [reference 30], using an unbiased density function using logistic sigmoids and learnable control parameter [reference 28]).\n\n\n\n## Clarity and exposition:\npro:\n+ The paper is well-structured and easy to follow. Arguments are clear.\n+ Figures/Tables and the respective captions are informative.\n+ The relevance of the topic is sufficiently motivated.\n+ The supplemental provides further implementation details in more detail.\n\ncon:\n- While text quality is mostly ok, there are typos, e.g. 'view-dependent' appears several times as 'view dependent', 'low/high frequency' should be written as 'low/high-frequency' if it is followed by a noun it refers to (e.g., 'components', 'content', 'surfaces', etc.), 'state of the art algorithm' should read 'state-of-the-art algorithm', 'coarse to fine optimization' should read 'coarse-to-fine optimization'.\n\n- Some selections of reference lists seem arbitrary, e.g., the reference lists on NeRF improvements (in the introduction) and multi-view stereo reconstruction (in the related work section). It is not clear why only these references have been used. For NeRF improvements, it might be helpful to cluster references regarding the aspect the techniques are focused on, e.g. 
faster training/inference, less constrained conditions, video inputs, generalization capabilities, multi-scale representations, etc.. Regarding voxel-based 3D reconstruction approaches there are many more variants, e.g.:\n- - Izadi et al., KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera, and its extensions such as:\n- - Niessner et al., Real-time 3D reconstruction at scale using voxel hashing\n- - Whelan et al., Kintinuous: Spatially extended kinectfusion\n- - Dai et al., Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface reintegration\n- - Klingensmith et al., Chisel: Real Time Large Scale 3D Reconstruction Onboard a Mobile Device using Spatially Hashed Signed Distance Fields\n- - Stotko et al., Efficient 3D Reconstruction and Streaming for Group-Scale Multi-Client Live Telepresence\n\nIn addition, there are more variants for surface meshing such as:\n- - Fuhrmann and Goesele, Floating Scale Surface Reconstruction\n- - Calakli and Taubin, SSD: Smooth signed distance surface reconstruction\n- - Berger et al., A survey of surface reconstruction from point clouds\n\n\n\n## Evaluation:\npro:\n+ The authors provide qualitative and quantitative results with comparisons to reasonable baselines (VolSDF and NeuS), where the proposed approach is demonstrated to lead to improvements in the reconstruction result.\n+ An ablation study shows the effects of high-frequency positional encoding, coarse-to-fine optimization and the use of the implicit displacement function.\n+ The supplemental provides further ablations on the coarse-to-fine parameter alpha_d and the number of frequency bands L as well as ablations for adaptive transparency.\n\ncon:\n- The MipNeRF approach that focuses on scene representation at continuously-valued scale also improves the representation of fine-grained details. It is not clear why this approach has only been mentioned in the list in the introduction, but not in the discussion of related work or among the competing techniques used for evaluation.\n- Training and inference times could be mentioned.\n- An ablation study of the effect of the regularizers would be interesting as well.\n- Additionally, the stability of the reconstructions of fine-grained structural details could be shown with respect to the number of views. Maybe the proposed approach gets the same quality of the reconstruction of NeRF with significant less input views.\n- There is no detailed discussion of problematic cases or failure cases in the main paper. Limitations regarding inaccurate reconstructions of linear structures (i.e. ropes) mentioned in the supplemental should rather be mentioned in the main paper. In addition, e.g., for the cases where the proposed approach does not outperform its competitors in Table 1, respective visualizations could be provided to demonstrate the kind of errors and where these are larger than for the other methods. This would provide insights on further improvement potential.\n\n\n\n## Reproducibility:\npro:\n+ The information in the paper together with the attached code seems sufficient to re-implement the approach.\n\ncon:\n- It is not entirely clarified, whether the code will also be made available afterwards.\n\n\n\n## References:\ncon:\n- See comment under 'Exposition'.\n- For several of the references, only the arxiv reference is provided and not the final conference/journal reference. 
This should be revised accordingly.\n\n\n\n## Post-rebuttal recommendation:\nThe paper addressed my major concerns in terms of ...\n... the detailed discussion of the works of VolSDF [30], NeuS [28] and Yifan et al.'s approach [32] which helps to better see the embedding of the proposed approach in the context of related work,\n... the improvements regarding the exposition (typos, also mentioning MipNeRF in the related work, mentioning the used Chamfer distance formula, etc.),\n... the promised code release,\n... the additional details regarding training/inference times,\n... the shift of limitations into the main paper,\n... the added visualization of local errors.\nAs a result, I increase my rating from borderline accept to weak accept. ## Exposition:\n- As mentioned above under 'strengths/weaknesses', some selections of reference lists seem arbitrary. For NeRF improvements, it might be helpful to cluster references regarding the aspect the techniques are focused on, e.g. faster training/inference, less constrained conditions, video inputs, generalization capabilities, multi-scale representations, etc.. The lists of papers for 3D reconstruction and surface reconstruction also seem arbitrary in the current formulation. Referring to the respective works or respective surveys could solve this.\n\n## Evaluation:\n- I would recommend to also provide the equation for the Chamfer distance, as there are different definitions used in literature.\n- Regarding Figures 1 and 3, showing the local errors on the surface with respect to the ground truth might additionally better highlight where deviations are larger.\n- As mentioned above, the MipNeRF approach focuses on scene representation at continuously-valued scale which improves the representation of fine-grained details. Why has this approach has only been mentioned in the list in the introduction, but not in the discussion of related work or among the competing techniques used for evaluation.\n- As mentioned above, training and inference times could be mentioned.\n- As mentioned above, an ablation study of the effect of the regularizers would be interesting as well.\n- As mentioned above, the stability of the reconstructions of fine-grained structural details could be shown with respect to the number of views. Maybe the proposed approach gets the same quality of the reconstruction of NeRF with significant less input views.\n- As mentioned above, there is no detailed discussion of problematic cases or failure cases in the main paper. Limitations regarding inaccurate reconstructions of linear structures (i.e. ropes) mentioned in the supplemental should rather be mentioned in the main paper. In addition, e.g., for the cases where the proposed approach does not outperform its competitors in Table 1, respective visualizations could be provided to demonstrate the kind of errors and where these are larger than for the other methods. This would provide insights on further improvement potential.\n- A discussion or comparison to the recent approaches Instant-NGP (Muller et al., Instant neural graphics primitives with a multiresolution hash encoding) and SVLF (Tremblay et al., RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis) would be interesting, however, it seems as if these works were not yet published at the NeuRIPS submission deadline.\n\n## Reproducibility:\n- Will code be release upon paper acceptance? The authors have adequately addressed the potential negative societal impact of their work. 
Limitations in terms of the need for optimizing an additional implicit function or overfitting have only been briefly discussed in the last section on conclusions. As mentioned above, the discussion of limitations needs to be improved.",
" \n This paper builds on the recent efforts in multi-view neural implicit surface reconstruction and improves in the reconstruction quality by introducing a different way of modelling SDFs, applying the implicit displacement fields to this task, and also proposing locally-adaptive scaling factors. They show decent improvements on multiple object-level dataset for both surface reconstruction and view synthesis.\n *Strengths*\n - I like the idea of locally-adaptive scaling factor s, which makes a lot of sense and I think can be a general add-on for other methods.\n - The paper is well written overall, easy to follow.\n \n *Weaknesses*\n- In 3.1, what is the motivation of modelling SDFs with transparency? I don't see the clear advantage of doing so compared to modelling with the density function as done in VolSDF, or weighting function in NeuS. Indeed, modelling with the transparency is indeed conceptually simper than NeuS, but fundamentally it is very similar to the combination of NeuS and VolSDF.\n- I don't think the method has fully utilize the benefit of implicit displacement field. For example, intuitively speaking, using a base model to model the coarse shape and another network to model the details should speed up the optimization process a lot, since the coarse shape can be very quickly obtained, and learning displacements is an easier optimization task than learning fully SDFs. Any experiments show the runtime benefit? \n- According to both quantitative and qualitative results, even after adding 3 different contributions and course-to-fine strategies, the proposed approach is only marginally better. I am not fully convinced by the usefulness of your different contributions. Moreover, because of the use of an extra MLPs, you even need more computational resources.\n \n \n - I wonder if you simply change the way of modelling SDFs of NeuS to yours, how much benefit can you obtain (without the displacement field) for NeuS?\n \n \n - No enough acknowledgement to the IDF paper in 3.2. For example, $\\mathbf{x}_b = \\mathbf{x} - f_d(\\mathbf{x}_b)\\mathbf{n}_b$ (the key idea of implicit displacement field) mentioned already in IDF. \n \n - Why not do the same as in the IDF paper where they use two SIRENs for the base model and displacement model but using different frequencies?\n \n \n - PSNR calculation. Do you calculate the PSNR on the training views or the held-out testing views?\n \n - Ablation study. Table 3 confuses me since it lacks of discussion of different componenets. Why Base+H is much worse than only Base? Why changing from only base to IDF does not gurantee better results (DTU experiment)? And it seems to me that most beneficial components are coming from the coarse-to-fine strategy and the locally-adaptive scales.\n \n Limitations and societal impact are briefly discussed in the conclusion. ",
" The paper present s new method for surface reconstruction using the recently popular neural volume rendering approach. The authors suggest improving existing methods, which often lack high-frequency details. To do so, the authors present various changes. First, they propose transforming a learned sign distance function into the transparency function, rather than the physical density or the weighting function as done in relevant baselines. Additionally, following previous works, the authors decompose the sign distance function into base and displacement functions, accompanied by a gradually increasing frequency learning strategy. Lastly, they propose improving certain regions by weighing based on the properties of the sign distance function. The paper presents reconstruction results compared to existing baselines at three different datasets. Strengths:\n\n1. The need for high-frequency details improvement is well motivated and understood.\n2. I find some of the suggested improvements innovative and contributive to future works. More specifically, I like the idea of transforming the transparency and examining its derivate and relation to the weighting function, and the idea of the adaptive weighting strategy. I mostly appreciate the concept of weighting based on the eikonal equation satisfaction.\n3. The related work section is concrete, well written, and covers the paper area for newcomers.\n\nWeaknesses:\n\n1. The contribution of each component is not straightforward. Although the authors conducted an ablation study, they have not presented qualitative results or discussed the potential implications of each component for the overall improvement. \n2. Following but separately from the above, I’m concerned with the need for the base and displacement decomposition. Because this decomposition is not unique, and since high frequencies encoding adds noise, more possible wrongly reconstructions exist that overfit the rendering. I believe that a visualization of the decomposition would emphasize its benefits. Another alternative is to show an ablation using the decomposition (“IDF”) with frequencies as the baseline NeuS.\n3. There is a clear difference between the presented VolSDF baseline results shown in this paper and other resources. I direct the authors to the original VolSDF paper for results of the DTU and BlendedMVS datasets (figures 5 and 10), and to the recent work RefNeRF (https://dorverbin.github.io/refnerf/) for the synthetic NeRF results (see video). I would appreciate it if the authors could explain this difference.\n4. There are some missing methodology implementation details. For example, it is unclear what are the discretization formulas for the transparency and the volume rendering integral approximations, given a set of samples along the ray. Also, I believe the authors should state how multiple surface intersections are handled (line 173). Additionally, it is not certain how the displacement constraint is imposed, nor its relation to equation 13.\n5. The superiority of the presented methods is mostly given quantitively but not qualitatively. It is also apparent in the PSNR results, surpassing all other methods, but the paper does not show rendering comparisons. Similar to the ablation study, I believe that additional qualitative results (even in the supplementary) would help the paper statements. Except for the mentioned concerns raised in the previous section, I would appreciate it if the authors would answer the following questions:\n\n1. 
Could the authors explain their choice of the logistic sigmoid function for Psi?\n2. (Relates to the above) Have the authors considered using the Gaussian or Laplace distributions CDF as the transformation function Psi? Why would one choice be better than the other?\n3. I suspect equation 9 has a typo, and it should be f_d(x_b)? If not, please explain the transition from equation 8.\n4. The intention behind equation 15 is unclear. Could the authors please explain? The authors addressed the method limitations. However, I encourage the authors to move the limitations section to the main paper in the revised version.",
" The paper improves the detail representation of geometric details for implicit surface reconstruction using multiview images. \nIt contains 3 contributions: \n\n1. an analysis of the SDF and values in the volumetric rendering, which leads to formally deriving the suitable transformation function to use SDF for volumetric rendering.\n2. an improved implicit displacement field for better detail reconstruction, which contains several changes to the original implicit displacement field to robustly work with 2D supervision\n3. a strategy to improve the optimization by adapting the mapping function in (1) spatially. # Strengths:\nThis paper is very well written and shows impressive improvements compared to the state-of-the-art for reconstructing geometric details.\nThe three steps are well motivated and the ablation shows that they all contribute to the improvement. The entirety of the paper provides a good basis for NeRF-based multi-view 3D reconstruction.\n\n# Weakness:\nI think while this paper was very clear in stating the 3 contributions (section 3.1-section 3.3), it's not very clear what exactly is novel (proposed by the paper) and what is from prior work. This is particularly true in section 3.2. L175-201 seem to be purely prior work [32], which should be clarified and condensed. More effort should be put into explaining and motivating the differences.\n\nThere is a small section in the supplemental (B.2) explaining hierarchical sampling based on $cs$. But I don't see any result demonstrating the effectiveness of this sampling strategy. 1. Regarding the displacement constrain $\\Psi'_s$, in the supplemental, the authors mentioned using some progressive strategy to gradually increase the constraint. If I understand correctly, this means gradually requiring the displacement to be spatially sparse and more focused on the surface area. The motivation is not very well explained. Shouldn't I want the opposite - by requiring a tighter spatial bound around the base surface at the beginning of the training, the base SDF can quickly converge, so that when optimizing the displacement $f_d$, $f_b$ remains stable and hopefully the composed function $f$ converges faster. Why is this not the case?\n2. Can I understand the adaptive transparency function described in (15) as a form of hard example mining? What do you think $\\|\\nabla f\\|$ would be a good indicator? BTW, should it be $\\\\|\\nabla f\\\\|$ instead of $\\|\\nabla f\\|$?\n3. Table 3: Is X+C2F adding C2F to X+H? FULL is adding adaptive transparency to IDF+H+C2F? If this is not the case, then the IDF+adaptive s is missing\n Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4,
5
] | [
"63nZlEv0Lou",
"EyKHQu24ZXs",
"HX-AZ2xaYZ2",
"3bkQ2VsVdw",
"RyCn8cKqI6c",
"KTpqZ1wPyPc",
"m1R7Bb9W26c",
"m1R7Bb9W26c",
"fGlxP_9PUj1Q",
"RyCn8cKqI6c",
"EyKHQu24ZXs",
"T38rDcJnxG9",
"KWpgoXFWOFn",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPnJuDKqOfX",
"nips_2022_UPnJuDKqOfX"
] |
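The discretization that the HF-NeuS authors quote in ALL-Q1 and R4-Q4 above can be made concrete with a short sketch. The following is a minimal NumPy illustration (not taken from the paper's released code; the function and argument names are illustrative), assuming the logistic sigmoid $\Psi_s(x) = 1/(1 + e^{-sx})$, which is consistent with the identity $\Psi'_s = s\Psi_s(1-\Psi_s)$ used in the R5-Q3 answer: it computes $\sigma = s(\Psi_s(f) - 1)\,\nabla f \cdot \mathbf{d}$ per sample, converts it to $\alpha_i = \mathrm{clamp}(1 - \exp(-\sigma_i (t_{i+1} - t_i)), 0, 1)$, and accumulates standard alpha-composition weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def render_weights_from_sdf(sdf, grad_dot_d, t, s):
    """Per-sample volume-rendering weights along one ray, following the
    sigma / alpha-composition formulas quoted in the rebuttal above.

    sdf:        (N,) signed distances f(r(t_i)) at the sample points
    grad_dot_d: (N,) directional derivatives grad f(r(t_i)) . d
    t:          (N + 1,) sample locations t_i along the ray
    s:          scale of the logistic sigmoid Psi_s (scalar or (N,))
    """
    psi = sigmoid(s * sdf)                 # transparency value Psi_s(f)
    sigma = s * (psi - 1.0) * grad_dot_d   # sigma = s (Psi_s(f) - 1) grad f . d
    delta = t[1:] - t[:-1]                 # section lengths t_{i+1} - t_i
    alpha = 1.0 - np.exp(-sigma * delta)
    alpha = np.clip(alpha, 0.0, 1.0)       # clamp, as NeuS does for multiple intersections
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # prod_{j<i} (1 - alpha_j)
    return alpha * trans                   # weights used to blend per-sample colors

# Example: for one ray, a pixel color would then be np.sum(weights[:, None] * colors, axis=0).
```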
nips_2022_jzd2bE5MxW | TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels | State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that is simultaneously performant for all clients, current federated optimization methods fail to converge to a comparable solution. We show that this performance disparity can largely be attributed to optimization challenges presented by nonconvexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation. Our technique yields accuracy improvements of up to $+36\%$ on FMNIST and $+37\%$ on CIFAR10 when clients have dissimilar data. | Accept | This paper introduced a novel two-stage scheme to combine the feature learning capacity of neural networks with the efficient optimization of linear models. It makes interesting empirical observations that are relevant to NeurIPS and may inspire future work. In particular, the observation that FedAvg learns useful features even in data heterogeneous settings is novel and helps to explain the empirical success of FedAvg plus fine-tuning. The experimental evaluation is mostly thorough, with multiple datasets tested and helpful ablations.
| train | [
"a3eXfmUSBa",
"FuTOBxt25KI",
"P7wEXchyNyl",
"krFURGWt-7T",
"_aSB9T3vNIN",
"ec-BBoYgU6o",
"W4jZcYLgJ91",
"w4DMfezoljM",
"xl_VJUaO0HF",
"_DgChOC4msz",
"pP1YH-nxZ-",
"pIJkqnVIevA",
"13fPyaoYajH",
"_vC1nFDhB8d",
"CoYcAC3Ml3y",
"YRDX79Rnge"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Authors response and new experiments have resolved my concerns. Thus I will raise my score. ",
" I would like to thank the authors for their extensive response that cleared a lot of my concerns. I would also encourage the authors to provide the discussion around the sources of non-iidness in the main text as well. Based on the above, I will raise my score.",
" We thank you for your feedback!\n\nRegarding question **Q3**: Apologies for misunderstanding the previous question. Fig 1 uses CIFAR10 whereas Fig 2 uses CIFAR100. The latter is more challenging and is known to have a bigger generalization gap even in the centralized setting. In all cases, the generalization gap in our federated setting closely matches those seen in the centralized setting.\n\nIn the centralized setting, ResNet on CIFAR100 has a ~25% generalization gap between train and test accuracy, whereas on CIFAR10 it is ~8%. Similar generalization gaps are observed in our federated experiments as well, except that both the train and the test accuracies are shifted down due to poor optimization.\n\n*Since the gap is not significantly larger in the federated setting, we can conclude the added difficulty is not due to generalization but due to optimization.* We will further revise lines 118-120 in our submission to clarify the dataset setup.\n\nThank you again for your feedback. We hope this clarifies the question. Please let us know if you would like any more details about this.",
" I thank the authors for their response, which address most of my concerns.\n\nThe only remaining question is Q3.\nAuthors state that \"FedAvg and SCAFFOLD show a larger gap only in the easy nearly-iid setting\". However, my original question is regarding to gap between train and test curve.\n\nIn Figure 1 (b), the curve `FedAvg-Train ($\\alpha=0.1$)` is very close to `FedAvg-Test ($\\alpha=0.1$)`. However, In Figure 2 (a), the training and testing curve with $\\alpha=0.1$ for FedAvg have a large gap.\nThe only difference between Figure 1 (b) and Figure 2 (a) I noticed so far is they use different dataset. However, how does it affect the gap between training and testing curve? Could any argument similar to lines 118-120 be given for Figure 2 (a)?",
" We thank you again for your thoughtful review and comments. With the author-reviewer period ending soon, we just wanted to reach out and see if any of the reviewers had any comments back to our rebuttal. We are looking for feedback on whether the points made in the reviews have now been addressed. We are happy to answer any remaining questions regarding our rebuttal or the paper itself. Thank you!",
" I thank the authors for their helpful and thorough response. Their clarifications and new experimental results (especially Table 9) have convinced me to raise my score. I would still like to see intuition on why eNTK representations perform better than the standard last-layer representations in future revisions.",
" We thank the reviewers for their careful reading of our paper and help with improving our manuscript. We sincerely appreciate that you find our work proposes '*a novel two-stage scheme*' (**Reviewer rWVN**, **Reviewer UZ2q**) and '*computationally more efficient*' method (**Reviewer 7oaK**), provides '*promising empirical performance*' (**Reviewer UZ2q**) and '*significant improvement by the BooNTK pipeline on all tasks involving a varying amount of label skew*', '*intuitive ablations and method exposition*' (**Reviewer 7oaK**), and '*helpful ablations*' (**Reviewer UZ2q**), and '*may inspire future work'* (**Reviewer UZ2q**).\n\nWe have modified our submission based on the suggestions/questions of the reviewers and have uploaded a revised version. Revised places are marked in blue color. In particular, we have made the following main updates:\n\n1. Added the centralized training results (in Table 1, Section 5).\n\n2. Added the results of BooNTK using full eNTK representations, i.e., without performing dimension reduction subsampling (in Table 8, Appendix B.5).\n\n3. Added the results of BooNTK with cross-entropy loss used in Stage 2 (in Table 8, Appendix B.5).\n\n4. Added the results of BooNTK using the representations before the last layer and cross-entropy loss (in Table 9, Appendix B.5).\n\n5. Compared with FedNova and add the comparison results to Table 10 (Appendix B.5).\n\n6. Added the comparison of the performance of BooNTK and existing methods when the number of clients is large in Table 11 (Appendix B.5).\n",
" We thank the reviewers for their careful reading of our paper and help with improving our manuscript. We sincerely appreciate that you find our work proposes '*a novel two-stage scheme*' (**Reviewer rWVN**, **Reviewer UZ2q**) and '*computationally more efficient*' method (**Reviewer 7oaK**), provides '*promising empirical performance*' (**Reviewer UZ2q**) and '*significant improvement by the BooNTK pipeline on all tasks involving a varying amount of label skew*', '*intuitive ablations and method exposition*' (**Reviewer 7oaK**), and '*helpful ablations*' (**Reviewer UZ2q**), and '*may inspire future work'* (**Reviewer UZ2q**).\n\nIn what follows, we try to address your concerns/questions and provide a detailed item-by-item response to your comments.\n\n======================================================================================\n\n>**Q1**: *Not enough intuition or empirical evidence is provided for why the second stage of the algorithm should consist of linear regression on the NTK-transformed data points rather than simply fixing the first L-1 layers and running lear regression to learn the paramters of the last layer with MSE loss, or learn it with multi-class logistic regression and cross entropy loss as is conventional. These approaches should be compared against as baselines and intuition should be provided as to why they are not used.*\n\n**A1**: Thank you for your suggestion. We have conducted new experiments on using the last layer representations and cross-entropy loss in the Stage 2 of BooNTK, and the results are summarized in Table 9, Appendix B.5 of our revised submission. From Table 9, we find that applying eNTK representations with high dimension outperforms using the representations before the last layer only, especially in the settings with high degrees of data heterogeneity. These results provide further evidence on applying eNTK features instead of the representations before the last layer features. We will include these results in our final version.\n\n>**Q2**: On a similar note, results on the computational cost of the proposed method should be included. It seems to be much larger than the baselines due to mapping every data point to a high dimension via parameter derivative computation.\n\n**A2**: Thank you for pointing this out. \nIn terms of communication cost, due to the subsampling step in BooNTK, BooNTK actually reduces 100x communication costs compared to existing methods on the CIFAR10 and CIFAR100 datasets (#parameter of the whole model: 11,169,345, #parameters of the subsample eNTK: 100,000). \nIn terms of training, during Stage 2 of BooNTK, every client only needs to compute the eNTK representations once and use the computed representations to learn the linear model. The computing eNTK representations step is negligible compared to the overall training.\nIn terms of inference, as pointed out in Q2 from Reviewer rWVN, the inference of our approach can be 3-5 times slower than standard with our method, and this can be a problem. However, 1) modern autograd libraries (e.g. JAX), and 2) computationally efficient approximations (see https://arxiv.org/abs/2004.0552) may significantly close this gap. \nWe will include more results and discussions on the computation cost of BooNTK and speeding up inference of BooNTK for future work in our final version.\n\n>**Q3**: *Personalized FL has been shown to be an effective alternative to learning a single global model in data heterogeneous settings. 
As such, some personalized FL methods should also be compared against.*\n\n**A3**: Personalization requires that each client has a separate (possibly different) test dataset. Here, we investigate the setting where all clients want to do well on the “entire” test dataset, not just on their local dataset. E.g. suppose each client has a single class data (client k has class k). A personalized model which trivially always answers k has a perfect personalized accuracy. Here, we are interested in solving the more challenging problem of each client being able to classify all classes. See lines 25-30 for motivation. ",
" >**Q4**: *Also, the experimental results would be strengthened by comparison against more recent FL methods that learn a single global model, e.g. [23,63]. Comparing with only 3 baselines is low for an empirical paper.*\n\n**A4**: In addition to the three baselines in Table 1, in our initial submission, we also compared our method to FedAdam [54] and FedDyn [1] in the appendix (Table 3 in Appendix B.1). \n\nThank you for suggesting new baselines. FedNova [63] is mainly useful when the number of updates on the clients are very different, but has no effect when each client has a similar number of datapoints. In our most challenging setting of C=#1 and C=#2, all clients have a similar number of datapoints. As suggested, we have conducted new experiments on comparing with FedNova in Table 10, Appendix B.5 of our revised submission. As shown in Table 10, BooNTK significantly outperforms FedNova on CIFAR10/100 in highly non-iid settings. Regarding server momentum [23], FedAdam has been shown to outperform it in recent work [53]. Hence we chose to compare against FedAdam. We will include the results of FedNova and server momentum in our final version.\n\n\n>**Q5**: *Missing related works: Huang et al., 2021 and Yue et al., 2021 employ the NTK for FL.*\n\n**A5**: Thank you for pointing out these references. We have added [Huang et al., 2021] and [Yue et al., 2021] to Section 4.1 of our revised submission.\n\n>**Q6**: *Intuition on why it makes sense to run linear regression for classification problems rather than logistic regression would be helpful.*\n\n**A6**: Thank you for pointing this out. Linear regression is easier than logistic regression in federated learning because the Hessian remains constant [Woodworth et al. 2020]. Furthermore, quadratic loss may sometimes work better even in the centralized setting [Achille et al. 2021].\n\nAlso, we have conducted new experiments on comparing the performance of quadratic loss and cross-entropy loss for BooNTK in Table 8, Appendix B.5 (BooNTK-CE) of our revised submission. As shown in Table 8, we find quadratic loss indeed achieves better performance than cross-entropy loss for BooNTK. We will include these results in our final version.\n\n[Woodworth et al. 2020] Woodworth, Blake E., Kumar Kshitij Patel, and Nati Srebro. \"Minibatch vs local sgd for heterogeneous distributed learning.\" https://arxiv.org/abs/2006.04735 NeurIPS 202. \n\n[Achille et al. 2021] Achille, Alessandro, et al. \"Lqf: Linear quadratic fine-tuning.\" https://arxiv.org/abs/2012.11140 CVPR 2021.\n\n>**Q7**: *Since BootNTK gets to pretrain using FedAvg for T_1 rounds, fair comparison with the other methods should allow them to train for an extra T_1 rounds.*\n\n**A7**: For all comparison experiments in our submission, we set the same number of communications rounds for our proposed method and existing methods. Specifically, we set T_1=T_2=100 for BooNTK on all datasets, and we run T_1 + T_2=200 rounds for existing methods. Further, we also compared the train/test curves of BooNTK and existing methods in Figure 6, Appendix B.3 in our initial submission. Thank you for pointing this out, and we will highlight and clarify this in our final version. 
\n\n>**Q8**: *SCAFFOLD has higher communication and computational cost than fedavg by constant factor due to computing and communicating the gradient correction terms (footnote b).*\n\n**A8**: As mentioned in Appendix A.1 of our initial submission, we describe a more communication efficient implementation of SCAFFOLD which is equivalent to Option II of SCAFFOLD [35]. Our implementation only requires a single model to be communicated between the client and server each round, making its communication complexity exactly equivalent to that of FedAvg.\n\n>**Q9**: *Or, if there are many clients and fewer samples per client, does locally optimizing the high-dimensional linear regression diverge?*\n\n**A9**: The linear regression should not be affected by small amounts of data. In fact, even in our setting, the linear model is overparameterized, with more parameters than data points per client. \n\nWe also conducted new experiments on the large number of clients setting, where we considered the number of clients K=50 on CIFAR100 with alpha=0.001. The results are summarized in Table 11, Appendix B.5 of our revised submission. We find our proposed method (BooNTK: 45.32%) significantly outperforms existing methods (best: 16.70% test accuracy). Thank you for your suggestion on the large number of clients experiments, and we will include these results in our final version.\n",
" We thank the reviewers for their careful reading of our paper and help with improving our manuscript. We sincerely appreciate that you find our work proposes '*a novel two-stage scheme*' (**Reviewer rWVN**, **Reviewer UZ2q**) and '*computationally more efficient*' method (**Reviewer 7oaK**), provides '*promising empirical performance*' (**Reviewer UZ2q**) and '*significant improvement by the BooNTK pipeline on all tasks involving a varying amount of label skew*', '*intuitive ablations and method exposition*' (**Reviewer 7oaK**), and '*helpful ablations*' (**Reviewer UZ2q**), and '*may inspire future work'* (**Reviewer UZ2q**).\n\nIn what follows, we try to address your concerns/questions and provide a detailed item-by-item response to your comments.\n\n======================================================================================\n\n>**Q1**: No discussion about how different non-iid ness settings affect the method. However, not all non-iidness is just label skew; for example, you could consider an image recognition system running in different mobile phones. As each phone comes with its own camera sensor, covariate shift (i.e., different $p(x)$ across clients) can also manifest as a source of non-iidness. \n \n**A1**: Past research (e.g. [Li et al. 2021]) has shown that label skew is the most challenging form of heterogeneity, and hence we focused on this in our work. Further, label skew automatically results in all kinds of covariate shifts since the classes naturally have very different features. For example, the different classes in CIFAR10/100 have different colors, textures, and camera technologies used (refer to Figure 7, Appendix B.5 for visually comparing pictures of ‘cat’ with ‘ship’ in CIFAR10). Having said this, we agree that there are other interesting notions of covariate shift whose effect we do not explore. This is a strong limitation of current evaluation methods in federated learning in general - we need new real world datasets with more interesting and realistic heterogeneities in order to carry out such investigations.\n \n[Li et al. 2021] Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He. Federated learning on non-iid data silos: An experimental study. arXiv preprint arXiv:2102.02079, 2021\n\n\n>**Q2**: *No discussion about how the linear model is less flexible. As a result, its improved performance (on a given feature set) could be just because it has less degrees of freedom to adapt to the non-i.i.d. peculiarities.*\n\n**A2**: Thank you for your suggestion on discussing the limitation of the linear model. We will include additional discussions on limitations of the linear model part in our final version. \n\nWe politely disagree with the reasoning of your argument ‘its improved performance could be just because it has less degrees of freedom to adapt to the non-i.i.d. peculiarities’. The linear model has just as many degrees of freedom as the neural network since it has the same number of parameters. In fact, as shown in Figure 2(c), Figure 5, and Figure 6, the train accuracy of BooNTK is almost 100% training across different settings, which means that our gain cannot be because of this.\n\n>**Q3**: *There are several approximations involved in the computation of the eNTK features, the impact of which is unclear. In section 4.1 the authors describe a series of approximations to eNTK in order to reduce the computational burden, however there is no (empirical) evaluation on how each one affects the final performance. 
Furthermore, how does BooNTK perform without the approximations to the eNTK (i.e., do the approximations have beneficial regularisation effects or are they detrimental for performance)?*\n\n**A3**: Thank you for your suggestion. We described two approximations in Section 4.1: (1). randomly reinitialize the last layer and only consider with respect to a single output logit; (2). subsample random coordinates from the eNTK feature. As explained in Line 150 - Line 157 in Section 4.1, the first approximation does not sacrifice the representation power. \n\nAs suggested, we have conducted new experiments on applying full eNTK representation in BooNTK - Stage 2 (without random sampling) to investigate the role of the second approximation. We provide the results in Table 8, Appendix B.5 (BooNTK-full-eNTK). As shown in Table 8, applying full eNTK representations slightly improves (improvements are smaller than 2% across all settings) the performance of BooNTK on CIFAR10/100. On the other hand, using subsampled eNTK *reduces the communication cost more than 100x* compared to the full eNTK and existing federated learning algorithms (#parameter of the whole model: 11,169,345, #parameters of the subsample eNTK: 100,000). We will include the full eNTK results as well as additional discussions on the communication costs in our final version.\n",
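As a rough illustration of the two approximations described in the response above (taking the gradient of a single output logit after re-initializing the last layer, and randomly subsampling coordinates), the following sketch computes a per-example eNTK feature. It is not the authors' code: the last-layer re-initialization is assumed to have been applied to `model` beforehand, and a shared seed stands in for using the same subsampled coordinates on every client.

```python
import torch

def entk_features(model, x, n_coords=100_000, seed=0):
    """Gradient of a single output logit w.r.t. all trainable parameters,
    subsampled at a fixed random set of coordinates (same seed everywhere).
    Illustrative sketch only; x is a single example tensor (C, H, W)."""
    params = [p for p in model.parameters() if p.requires_grad]
    logit = model(x.unsqueeze(0))[0, 0]               # a single output logit
    grads = torch.autograd.grad(logit, params)
    flat = torch.cat([g.reshape(-1) for g in grads])  # full eNTK feature (dim = #params)
    gen = torch.Generator().manual_seed(seed)
    idx = torch.randperm(flat.numel(), generator=gen)[:n_coords]
    return flat[idx].detach()
```

The subsampling step is what brings the feature dimension from the full parameter count (≈11M here) down to the 100,000 coordinates quoted above.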
" >**Q4**: *How would, e.g., the performance of a pipeline similar to BooNTK fare if instead of the linear model fitting step (i.e., stage 2), one just switched from FedAvg to FedProx with a strong regularisation strength for stage 2?*\n\n**A4**: Thank you for your suggestion on the experiments. We have conducted new experiments on switching from FedAvg to FedProx on CIFAR10 (#C=2). By sweep over the the regularization parameter from {10.0, 1.0, 0.1, 0.01, 0.001} for the FedProx (after switch from FedAvg), the best performance is 58.74%, which is only slightly better than FedAvg (56.86%) and FedProx (56.87%) and much worse than BooNTK (83.02%). These results suggest that the gain is not due to the regularization. \n\n\n>**Q5**: *The authors do not take into account that the linear model is less flexible by design, therefore it is harder to fit the non-iid peculiarities.*\n\n**A5**: We disagree with the argument ‘it is harder to fit the non-iid peculiarities’. As shown in Figure 2(c), Figure 5, and Figure 6, the train accuracy of BooNTK is almost 100% training accuracy across different settings.\n\n>**Q6**: How does finetuning just the classification layer (while keeping the rest of the network frozen) with SCAFFOLD work, relative to the eNTK approach (which requires further approximations)? \n\n**A6**: Thank you for your suggestions on experiments. We have conducted new experiments on using the last layer representations and cross-entropy loss in the Stage 2 of BooNTK, and the results are summarized in Table 9, Appendix B.5 of our revised submission. From Table 9, we find that applying eNTK representations with high dimension outperforms using the representations before the last layer only, especially in the settings with high degrees of data heterogeneity. These results provide further evidence on applying eNTK features instead of the representations before the last layer features. We will include these results in our final version.\n\n\n>**Q7**: The authors consider classification tasks, however for the second stage of BooNTK, they consider an MSE loss on an one-hot representation of the targets. What is the motivation of the MSE loss? Intuitively, you should be able to apply a softmax on the linear model of the first eq. at sec. 4.1 to get a logistic regression model, which is more appropriate for classification.\n\n**A7**: Thank you for pointing this out. Linear regression is easier than logistic regression in federated learning because the Hessian remains constant [Woodworth et al. 2020]. Furthermore, quadratic loss may sometimes work better even in the centralized setting [Achille et al. 2021].\n\nAlso, we have conducted new experiments on comparing the performance of quadratic loss and cross-entropy loss for BooNTK in Table 8, Appendix B.5 (BooNTK-CE) of our revised submission. As shown in Table 8, we find quadratic loss indeed achieves better performance than cross-entropy loss for BooNTK. We will include these results in our final version.\n\n[Woodworth et al. 2020] Woodworth, Blake E., Kumar Kshitij Patel, and Nati Srebro. \"Minibatch vs local sgd for heterogeneous distributed learning.\" https://arxiv.org/abs/2006.04735 NeurIPS 202. \n\n[Achille et al. 2021] Achille, Alessandro, et al. \"Lqf: Linear quadratic fine-tuning.\" https://arxiv.org/abs/2012.11140 CVPR 2021.",
" We thank the reviewers for their careful reading of our paper and help with improving our manuscript. We sincerely appreciate that you find our work proposes '*a novel two-stage scheme*' (**Reviewer rWVN**, **Reviewer UZ2q**) and '*computationally more efficient*' method (**Reviewer 7oaK**), provides '*promising empirical performance*' (**Reviewer UZ2q**) and '*significant improvement by the BooNTK pipeline on all tasks involving a varying amount of label skew*', '*intuitive ablations and method exposition*' (**Reviewer 7oaK**), and '*helpful ablations*' (**Reviewer UZ2q**), and '*may inspire future work'* (**Reviewer UZ2q**).\n\nIn what follows, we try to address your concerns/questions and provide a detailed item-by-item response to your comments.\n\n======================================================================================\n\n>**Q1**: *The convex formulation clearly sacrifice the model capacity. The feature learning can only happen in first stage (or non-convex part), thus BookNTK learns less feature than centralized training, which doesn't involve two-stage training.*\n\n**A1**: This is a good point, as you say, the second phase of BooNTK does not perform feature learning and only the first phase of BooNTK performs feature learning. However, as shown in Figure 6, Appendix B.3, we study the performance of BooNTK with a different number of rounds ($T_1$) for phase1. We find that using larger $T_1$ does not significantly improve the test accuracy of BooNTK when $T_1$ is larger than 60, which suggests that the feature learning saturates after ~60 communication rounds in phase 1 of BooNTK. Moreover, BooNTK leverages the effective features learned in phase 1 and significantly improves the model performance compared to existing approaches. \n\nMeanwhile, our convex formulation does not sacrifice the model capacity. For example, BooNTK achieves ~100% training accuracy in phase 2, as shown in Figure 5 and Figure 6. In contrast, BooNTK aims to maximize the usage of the features learned in phase 1 by using more features (i.e., overparameterized linear model). \n\nLastly, as shown in our experimental results, BooNTK achieves fast convergence in Stage 2 (less than 20 communication rounds). This suggests that our proposed convexify approach could serve as an effective approach, that is complementary to existing methods, for tackling the heterogeneity issue in federated learning.\n \n>**Q2**: *If I understand correctly, the final training result for BookNTK is a neural network plus a linear model. Is there a way to compile the final network and the linear model into a single model that support inference with a single forward pass?*\n\n**A2**: Yes, this is correct. Indeed, the inference of our approach can be 3-5 times slower than standard with our method, and this can be a problem. However, 1) modern autograd libraries (e.g. JAX), and 2) computationally efficient approximations (see https://arxiv.org/abs/2004.0552) may significantly close this gap. We will include more discussions on speeding up our proposed approach in our final version.\n \n>**Q3**: *Section 3 mentioned \"the train and test accuracies in Figure 1(b) match quite closely, suggesting that the failure lies in optimization\". However Figure 2 shows a different picture, where train acc and test acc have a large gap. What's the reason behind this discrepancy?*\n\n**A3**: Thank you for pointing this out. FedAvg and SCAFFOLD show a larger gap only in the easy nearly-iid setting, where training is not a big issue. 
The setting in Figure 1(b) – where each client only has samples from 2 classes – has a much higher degree of non-iid-ness than the ones in Figure 2. We will incorporate your suggestion and improve the clarity of our presentation in our final version.",
" >**Q4**: Why quadratic loss instead of cross-entropy is used in (1)?\n\n**A4**: Thank you for pointing this out. Linear regression is easier than logistic regression in federated learning because the Hessian remains constant [Woodworth et al. 2020]. Furthermore, quadratic loss may sometimes work better even in the centralized setting [Achille et al. 2021].\n\nAlso, we have conducted new experiments on comparing the performance of quadratic loss and cross-entropy loss for BooNTK in Table 8, Appendix B.5 (BooNTK-CE) of our revised submission. As shown in Table 8, we find quadratic loss indeed achieves better performance than cross-entropy loss for BooNTK. We will include these results in our final version.\n\n[Woodworth et al. 2020] Woodworth, Blake E., Kumar Kshitij Patel, and Nati Srebro. \"Minibatch vs local sgd for heterogeneous distributed learning.\" https://arxiv.org/abs/2006.04735 NeurIPS 202. \n\n[Achille et al. 2021] Achille, Alessandro, et al. \"Lqf: Linear quadratic fine-tuning.\" https://arxiv.org/abs/2012.11140 CVPR 2021.\n\n>**Q5**: *How much worse is BookNTK than centralized training? I strongly suggest authors to add centralized training result for a comparison.* \n\n**A5**: Thank you for your great suggestion. We have added the results of centralized training to our main comparison table (Table 2 in Section 5) in our revised submission. Overall, BooNTK performs worse than the centralized training, especially in the highly non-iid setting. However, the gaps between BooNTK and centralized training are much smaller than existing methods. We will include the centralized training results in our final version.\n",
" This paper focuses on difficulty introduced by non-convexity and data heterogeneity in federated learning.\nAuthors first show that, with data heterogeneity, linear models and convex optimization problem can be trained efficiently with gradient correction techniques such as SCAFFOLD, while non convex problem cannot. \nThen, in order to sidestep the non-convexity for neural network, authors substitute original model with a linear approximation and original loss with a quadratic loss, such that non-convex optimization problem turns into a convex regression problem based on NTK. \nSince feature learning is necessary for NTK align with data to provide good result, authors introduce two-stage training, where feature learning only happens in first stage, and convex optimization with gradient correction happens in second stage. \nAuthors conduct experiments to show that such two-stage optimization can fit the data faster and produce better test accuracy. Strength:\n\nThis paper introduced a novel two-stage scheme to combine the feature learning capacity of neural network and for efficient optimization linear model.\n\nWeakness:\n\nThe convexified problem is introduced mainly due to an optimization consideration: SCAFFOLD perform well on convex method. However, the convex formulation clearly sacrifice the model capacity. The feature learning can only happen in first stage (or non-convex part), thus BookNTK learns less feature than centralized training, which doesn't involve two-stage training.\n\n If I understand correctly, the final training result for BookNTK is a neural network plus a linear model. Any inference requires a backward propagation to derive the feature for empirical NTK to apply a linear model. This is different from common workflow. Is there a way to compile the final network and the linear model into a single model that support inference with a single forward pass?\n\nSection 3 mentioned \"the train and test accuracies in Figure 1(b) match quite closely, suggesting that the failure lies in optimization\". However Figure 2 shows a different picture, where train acc and test acc have a large gap. What's the reason behind this discrepancy?\n\nWhy quadratic loss instead of cross-entropy is used in (1)?\n\nHow much worse is BookNTK than centralized training? Authors only mentioned one centralized baseline in section 3, where they show that \"21% (out of the 35% gap in accuracy) may be attributed to a failure to optimize the linear output layer\". However, for BookNTK, none of results in section 5 and appendix contains centralized training. I think this question is as important as \"How much better is BookNTK than FedAvg\". I strongly suggest authors to add centralized training result for a comparison. Yes.",
" This work proposes BooNTK; a two stage approach for FL that allows for better performance in cross-silo settings. The main idea is to 1) perform standard FedAvg training on a non-convex model, such as a neural network, then 2) approximate it with a first order Taylor approximation around the parameters found and finally 3) optimize the parameters of this linear model with federated training using optimisers with gradient correction, such as SCAFFOLD. The authors motivate this through the optimization difficulties of non-convex models in the non-iid setting; empirically, the loss of performance when moving to non-idd data is larger for non-convex model compared to convex.\n\nThe authors describe several further approximations that they do to make the method more efficient (i.e., removing the bias term of the Taylor approximation and only considering a subset of the coordinates of the gradient vector) and then demonstrate BooNTK’s performance on three label skew non-iid settings on FMNIST, CIFAR10 and CIFAR100.\n Strengths\n- Good results on the cross-silo setting\n - There is significant improvement by the BooNTK pipeline on all tasks involving a varying amount of label skew.\n- Intuitive ablations and method exposition\n - The method is explained / motivated well and I liked the layer importance investigation. The ablation studies are also useful and highlight the sensitivity of the method on the choice of (some) hyper parameters.\n- Simple method\n - The method is quite simple and straightforward. As a bonus, the second step is also computationally more efficient than training the original neural network.\n\nWeaknesses:\n- No discussion about how different non-iid ness settings affect the method\n - Given the claims of the work about the negative effects of data heterogeneity in the non-convex setting, I would have expected that the authors experimented with more diverse non-iid settings (i.e., not just label skew). As the current label-skew experiments concern mostly different marginals over the labels, $p(y)$, at each client, a better adjusted classification layer is important. I believe this is one of the reasons that BooNTK is effective in these scenarios. However, not all non-iidness is just label skew; for example, you could consider an image recognition system running in different mobile phones. As each phone comes with its own camera sensor, covariate shift (i.e., different $p(x)$ across clients) can also manifest as a source of non-iidness. In this case, I would intuitively expect that the distribution of the features also differs among clients, therefore, adjusting just the classifier might not be enough.\n- No discussion about how the linear model is less flexible \n - The linear model is less flexible than the non-convex neural network. As a result, its improved performance (on a given feature set) could be just because it has less degrees of freedom to adapt to the non-i.i.d. peculiarities. I believe a discussion around this is missing.\n- There are several approximations involved in the computation of the eNTK features, the impact of which is unclear\n - In section 4.1 the authors describe a series of approximations to eNTK in order to reduce the computational burden, however there is no (empirical) evaluation on how each one affects the final performance. \n Most of my questions revolve around the weaknesses described above.\n\n1. 
The authors do not take into account that the linear model is less flexible by design, therefore it is harder to fit the non-iid peculiarities (and could explain its improved performance). I believe a control experiment would highlight whether this can be an issue in practice. For example, one could do a heavily regularised non-convex model to see if the gap is shrinking in Figure 1. How would, e.g., the performance of a pipeline similar to BooNTK fare if instead of the linear model fitting step (i.e., stage 2), one just switched from FedAvg to FedProx with a strong regularisation strength for stage 2?\n\n2. I believe that the effect of various sources of non-iidness on BooNTK should be investigated, as the experiments consider only label skew (where generally adjusting the classifier only is sufficient). What happens when there is, e.g., covariate shift? Intuitively, this should affect the earlier layers more.\n\n3. I believe some more investigation on the eNTK part of BooNTK is required. For example, how does finetuning just the classification layer (while keeping the rest of the network frozen) with SCAFFOLD work, relative to the eNTK approach (which requires further approximations)? Furthermore, how does BooNTK perform without the approximations to the eNTK (i.e., do the approximations have beneficial regularisation effects or are they detrimental for performance)?\n\n4. The authors consider classification tasks, however for the second stage of BooNTK, they consider an MSE loss on an one-hot representation of the targets. What is the motivation of the MSE loss? Intuitively, you should be able to apply a softmax on the linear model of the first eq. at sec. 4.1 to get a logistic regression model, which is more appropriate for classification. \n The authors could have spent a bit more time discussing potentially negative aspects of their method (e.g., the linearity of the model in the second stage).",
" This paper makes two main contributions: 1) empirically observing that the early layers of neural networks trained by FedAvg learn useful features even in heterogeneous data settings, while the performance degradation due to data heterogeneity is due to ineffective later layers, and 2) based on this observation, proposing a novel two-stage cross-silo federated learning method, BooNTK, that first uses FedAvg to learn early-layer features, applies a particular transformation based on these features to all data points, then learns a linear classifier on top of the transformed data using SCAFFOLD. \n\nTo verify that FedAvg learns useful early-layer features, FedAvg is run on a heterogeneous image dataset, then the last $\\ell$-many layers are retrained in a centralized manner while the earlier layers are held fixed. It is observed that using the FedAvg-pretrained early layers leads to huge performance improvement over using random weights for the early layers when the last layer is retrained centrally. Also, there is some improvement due to using the FedAvg-pretrained vs random weights when the last $\\ell$-many layers are retrained centrally for all $\\ell$, suggesting that the early layers learned by FedAvg are extracting useful information. This observation inspires the first stage of the proposed method: FedAvg pretraining to learn the early layer weights. The paper also observes that SCAFFOLD is robust to data heterogeneity when the client losses are convex (but is not robust when the losses are nonconvex). This inspires the second stage of the proposed method: applying SCAFFOLD to convex losses to learn the last linear layer of the network. In particular, the second stage approach consists of computing the neural tangent kernel (NTK) representation of each data point using the pretrained weights in the NTK computation. Then, SCAFFOLD is employed to solve a multi-output linear regression with the transformed data points as inputs and the one-hot encoded labels as the target vectors. Empirical results are provided showing large improvement in training and testing accuracy of the proposed method over FedAvg, FedProx and SCAFFOLD on CIFAR100, FEMNIST, and MNIST with very heterogeneous partitions.\n Strengths\n\n- The paper makes interesting empirical observations that are relevant to NeurIPS and may inspire future work. In particular the observation that FedAvg learns useful features even in data heterogeneous settings is novel and helps to explain the empirical success of FedAvg plus fine-tuning.\n\n- These observations motivate a new federated learning method with promising empirical performance.\n\n- The experimental evaluation is mostly thorough, with multiple datasets tested and helpful ablations.\n\n- The writing is clear.\n\nWeaknesses\n\n1. Not enough intuition or empirical evidence is provided for why the second stage of the algorithm should consist of linear regression on the NTK-transformed data points rather than simply fixing the first L-1 layers and running lear regression to learn the paramters of the last layer with MSE loss, or learn it with multi-class logistic regression and cross entropy loss as is conventional. Both of the latter approaches maintain convexity of the loss functions, are much simpler to implement, and involve an optimization over far fewer parameters (presumably the output of the last layer has dimension far less than p=100,000 as in the NTK approach). 
These approaches should be compared against as baselines and intuition should be provided as to why they are not used. \n\n2. On a similar note, results on the computational cost of the proposed method should be included. It seems to be much larger than the baselines due to mapping every data point to a high dimension via parameter derivative computation.\n\n3. Personalized FL has been shown to be an effective alternative to learning a single global model in data heterogeneous settings. As such, some personalized FL methods should also be compared against. Also, the experimental results would be strengthened by comparison against more recent FL methods that learn a single global model, e.g. [23,63]. Comparing with only 3 baselines is low for an empirical paper.\n\nMinor notes\n- Missing related works: Huang et al., 2021 and Yue et al., 2021 employ the NTK for FL.\n\n- Intuition on why it makes sense to run linear regression for classification problems rather than logistic regression would be helpful.\n\n- Since BootNTK gets to pretrain using FedAvg for T_1 rounds, fair comparison with the other methods should allow them to train for an extra T_1 rounds.\n\n-Cross-device FL may still have data heterogeneity\n\n- SCAFFOLD has higher communication and computational cost than fedavg by constant factor due to computing and communicating the gradient correction terms (footnote b).\n\n- “By taking an eNTK approximation, BooNTK optimizes a convex approximation while using information from all layers of the model.” - besides the last layer\n\nHuang et al., FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Convergence Analysis, https://arxiv.org/pdf/2105.05001.pdf, 2021.\nYue et al., Neural Tangent Kernel Empowered Federated Learning, https://arxiv.org/pdf/2110.03681.pdf, 2021.\n\n Why is the proposed approach limited to the cross-silo setting, with a small number of clients? It seems to me that it should also work for settings with many clients. Or, if there are many clients and fewer samples per client, does locally optimizing the high-dimensional linear regression diverge?\n The authors should discuss the computational complexity of the proposed method."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"P7wEXchyNyl",
"pP1YH-nxZ-",
"krFURGWt-7T",
"pIJkqnVIevA",
"nips_2022_jzd2bE5MxW",
"xl_VJUaO0HF",
"nips_2022_jzd2bE5MxW",
"YRDX79Rnge",
"YRDX79Rnge",
"CoYcAC3Ml3y",
"CoYcAC3Ml3y",
"_vC1nFDhB8d",
"_vC1nFDhB8d",
"nips_2022_jzd2bE5MxW",
"nips_2022_jzd2bE5MxW",
"nips_2022_jzd2bE5MxW"
] |
nips_2022_OxfI-3i5M8g | Scalable Neural Video Representations with Learnable Positional Features | Succinct representation of complex signals using coordinate-based neural representations (CNRs) has seen great progress, and several recent efforts focus on extending them for handling videos. Here, the main challenge is how to (a) alleviate a compute-inefficiency in training CNRs to (b) achieve high-quality video encoding while (c) maintaining the parameter-efficiency. To meet all requirements (a), (b), and (c) simultaneously, we propose neural video representations with learnable positional features (NVP), a novel CNR by introducing "learnable positional features" that effectively amortize a video as latent codes. Specifically, we first present a CNR architecture based on designing 2D latent keyframes to learn the common video contents across each spatio-temporal axis, which dramatically improves all of those three requirements. Then, we propose to utilize existing powerful image and video codecs as a compute-/memory-efficient compression procedure of latent codes. We demonstrate the superiority of NVP on the popular UVG benchmark; compared with prior arts, NVP not only trains 2 times faster (less than 5 minutes) but also exceeds their encoding quality as 34.07$\rightarrow$34.57 (measured with the PSNR metric), even using $>$8 times fewer parameters. We also show intriguing properties of NVP, e.g., video inpainting, video frame interpolation, etc.
| Accept | The paper proposes a coordinate-based architecture for representing a video in a parameter-efficient and computationally efficient manner. Such an architecture can be used for video compression, inpainting, and frame interpolation.
The initial concerns were addressed during the rebuttal period and all the reviewers had a positive opinion of the paper. I therefore recommend acceptance. | train | [
"di0GkRzvpw5",
"WFcfavFTmUH",
"prMEV5l3wXY",
"yi9yye93t_4",
"L4RGdu7PK1v",
"5zUxMvUcxwX",
"q6cLtZVtENG",
"eMBadi3LXgA",
"jWgxQGbbMK",
"AssIQQ89b_G",
"iVWeYWNYNwr",
"xRB1tU4RzXI",
"WKcX34hI4SB",
"Xc81zRU3pKT",
"GIHYK7Vhstz",
"8jae65LSbxF",
"jAhoqJ3fvR3",
"DAFO2_y9QS",
"sNf6xpmXSPy",
"dqgAN-6_b_"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! We are happy to find that our rebuttal successfully addressed all of your concerns. Although our primary focus is not video compression, we also think that providing the comparison results with deep video codecs under the fair evaluation condition would further strengthen our paper and give a good insight into future research in both CNRs and video compression. Thank you for the suggestion, and we will add such a comparison in the final version of the manuscript. \n",
" Many thanks to authors who provide more detailed experimental results and have refined overall writting following reviewers' comments. The response from authors well clarifies most of my concerns and I want to keep my initial decision of acceptance. If possible I want to see more competing results with deep video codec under a fair evaluation condition, although I understand that compression is not the major goal.",
" Dear reviewers and AC,\n\nThank you for your time and efforts in reviewing our manuscript. \n\nThis letter is to notice that we have made an additional update to the revision, which further incorporates the comments from Reviewer kkNU during the discussion period: \n\n- Adjust boldfacing of the ablation study result (Table 3)\n- Additional ablation study on the role of each component in its early training stage (Table 3)\n- Clarification of the role of modulated implicit function (Section 3.1, Section 4.3)\n\nThese updates are temporarily highlighted in “red” upon the previous revision.\n\nIf you have time, please check out the updated manuscript and the individual responses, and let us know if there are any additional concerns or questions. We will be happy to respond to your further comments during the remainder of the author discussion period.\n\nThanks, \nAuthors \n",
" Thank you for your response and for providing additional feedback on our manuscript! We decided to further address your comments with the additional experiments and the revision, which are temporarily marked as “red” for your convenience.\n\nAs you pointed out, the effect of modulation is often marginal in its final training stage. In this respect, we fully agree that boldfacing the last two lines in Table 3 would be more clear for readers, and we updated Table 3 accordingly following your suggestion.\n\nNevertheless, we do remark that the modulated implicit function is beneficial to achieving high-quality encoding within a small number of training steps, e.g., necessary for real-time or mobile applications. To validate this, we conducted additional experiments to highlight the role of modulated implicit function at early training iterations. As shown in the below table, similar to the other two components of NVP, modulated implicit function improves encoding quality measured in PSNR metric (higher is better) from 32.15 to 34.85 compared with a naive multilayer perceptron after 1,500 iterations ($\\approx$7 minutes). We updated Section 3.1, Section 4.3, and Table 3 to clarify and add the importance/role of the modulated implicit function. Thank you very much again and we believe that this further strengthens our paper.\n\n\n\\begin{array}{c c c c c}\n\\hline\n\\text{Keyframes} & \\text{Sparse feat.} & \\text{Module.} & \\text{\\\\# Params.} & \\text{PSNR (1.5K)} & \\text{PSNR (150K)} \\newline\n\\hline\n\\color{red}\\tt{X} & \\color{green}\\checkmark & \\color{green}\\checkmark & \\text{136M} &29.95\\small{\\pm2.69} & 31.21\\small{\\pm2.80} \\newline\n\\color{green}\\checkmark & \\color{red}\\tt{X} & \\color{green}\\checkmark & \\text{138M} &29.88\\small{\\pm4.99} & 32.44\\small{\\pm4.48}\\newline\n\\color{green}\\checkmark & \\color{green}\\checkmark & \\color{red}\\tt{X} & \\text{147M} &32.15\\small{\\pm3.08} & \\mathbf{38.04\\small{\\pm2.27}}\\newline\n\\hline\n\\color{green}\\checkmark & \\color{green}\\checkmark & \\color{green}\\checkmark & \\text{136M} & \\mathbf{34.85\\small{\\pm2.69}} & \\mathbf{38.89\\small{\\pm2.11}}\\newline\n\\hline\n\\end{array} \n",
" I'd like to thank the authors for the rebuttal. The rebuttal clarifies many of my concerns and I think this is a technically solid work with good validation thus, I'm happy to keep my initial recommendation.\n\nOne comment re boldfacing: when boldfacing one should take into account std estimates and not only look at the means. For example, when looking at the ablation study with std values one could clearly see that the benefit of modulation is not statistically significant and the last two lines in the table should be boldfaced. This suggest that the importance coming from the modulation might not be as large as suggested by the current text. Please correct the boldfacing accordingly and adjust the text accordingly if the paper gets accepted.\n\n",
" Dear Reviewer LDYR,\n\nWe again sincerely appreciate your efforts and time in reviewing and providing incisive comments on our paper.\n\nWe kindly remind you that the discussion period will end in two days. We believe that we successfully addressed your concerns, questions, and suggestions with the results of the supporting experiments and the revised manuscript.\n\nIf you have any further concerns, questions or suggestions, please do not hesitate to let us know.\n\nThank you very much!\n\nAuthors",
" Dear Reviewer kkNU,\n\nWe again sincerely appreciate your efforts and time in reviewing and providing incisive comments on our paper.\n\nWe kindly remind you that the discussion period will end in two days. We believe that we successfully addressed your concerns, questions, and suggestions with the results of the supporting experiments and the revised manuscript.\n\nIf you have any further concerns, questions or suggestions, please do not hesitate to let us know.\n\nThank you very much!\n\nAuthors",
" Dear Reviewer MBhC,\n\nWe again sincerely appreciate your efforts and time in reviewing and providing incisive comments on our paper. \n\nWe kindly remind you that the discussion period will end in two days. We believe that we successfully addressed your concerns, questions, and suggestions with the results of the supporting experiments and the revised manuscript. \n\nIf you have any further concerns, questions or suggestions, please do not hesitate to let us know.\n\nThank you very much!\n\nAuthors",
" Dear Reviewers,\n\nThank you for your time and efforts in reviewing our paper.\n\nWe believe that we sincerely and successfully address your concerns and questions, with the results of the supporting experiments. We also believe that our paper becomes much stronger through the clarification.\n\nIf you have any further concerns or questions, please do not hesitate to let us know and we will be happy to get back to you to clarify them.\n\nThank you very much!\n\nAuthors",
" Dear reviewers and AC,\n\nWe deeply appreciate your efforts and time in reviewing our manuscript. \n\nOur work proposes a novel coordinate-based neural representation (CNR) for videos, coined NVP, based on proposing learnable positional features that effectively amortize a given video as latent codes. As highlighted by reviewers, our paper is well-written (kkNU, MBhC), well-motivated to tackle the efficiency trilemma of video CNR (kkNU, MBhC), presents a new idea to learn the keyframes (LDYR, kkNU), while showing a solid empirical result (kkNU, MBhC). \n\nWe appreciate the reviewers’ insightful, incisive comments on our paper. To answer your concerns and questions, we have updated the manuscript and our project page (https://neurips2022-nvp.github.io) with the following additional experiments and clarification:\n\n- Showing more compelling properties (video inpainting and video super-resolution) of NVP (Section 4.2) other than video frame interpolation and video compression\n- Validation of our method on videos with larger temporal variations (Appendix F, project page) \n- Additional experiment on a larger video (high-resolution, long) to show the scalability of our method (Appendix F, project page) \n- Additional experiment on a Big Buck Bunny video, following the prior setup in NeRV [1] (Appendix F)\n- Revision of Abstract and Introduction to clarify the focus of our work (Section 1 and 2)\n- More extensive ablation studies to validate each component of NVP (Table 3)\n- Video-wise compression results in the UVG benchmark (Appendix G)\n\nWe temporarily highlighted these updates as \"blue\" for your convenience.\n\nThank you very much ! \nAuthors \n\n---\n[1] Chen et al., NeRV: Neural Representations for Videos, NeurIPS 2021. \n",
" We deeply appreciate your thoughtful comments and efforts in reviewing our manuscript. We mark our major revision in the revised manuscript with “blue.” We also illustrate several additional images and videos on our project page to answer your questions: https://neurips2022-nvp.github.io. We respond to each of your comments one-by-one in what follows. \n\n---\n\n**[W1] Combining traditional codecs is not compelling due to artifacts and bounds of the codecs on compression.**\n\nWe first emphasize that the primary goal of ours is NOT to develop a better compression method. Instead, we aim for designing a better coordinate-based neural representation (CNR) for videos that is highly parameter-/compute-efficient and provides high-quality encoding. Here, video compression is just one of many possible use cases of CNRs, such as video inpainting, video frame interpolation, super-resolution, etc.\n\nThere are indeed several works that exploited CNRs for the purpose of compression. What our paper asks, on the other hand, is somewhat the opposite: can we use traditional codecs to design a better CNR encoding scheme? In this respect, we rather think incorporating traditional codecs into video CNRs is a novel and strong feature of our method, not existing in other works. Moreover, combining these codecs and video CNRs even has the potential to remove the artifacts caused by traditional codecs, making this direction appealing even from the perspective of compression. For instance, Figure 9 shows that traditional codecs often suffer from the artifact of inconsistent encoding quality over different timesteps, whereas our method (combined with traditional codecs) achieves consistent encoding performance and mitigates such artifacts.\n\n---\n\n**[W2] The compression performance of NVP is worse than the traditional codecs.** \n\nWe agree that the compression performance of our method, NVP, is sometimes below state-of-the-art traditional codecs (e.g., H.264, HEVC) with respect to PSNR metrics. However, we remark that our method shows a better perceptual similarity (measured with the LPIPS metric) than traditional codecs (as shown in Figure 6). Hence, it is arguable to say which method is clearly superior to the other baselines. Moreover, as we previously mentioned (in [W1]), we would like to emphasize that our main focus is not video compression; our method provides other intriguing properties that the existing codecs cannot achieve, e.g., video inpainting, video frame interpolation, and super-resolution, etc. To clarify our main focus, we revised the manuscript (Section 1 and 3) to clarify our main goal and emphasize the benefits of CNRs. We also revised Section 4.2 to demonstrate various applications of our method as video CNRs. In particular, we additionally provide a video inpainting result (Figure 4) and a super-resolution result (Figure 5) of NVP; please refer to our project page for better visualization.\n\n---\n\n**[W3] The benchmark dataset (UVG) seems to mostly contain static videos and thus not a good benchmark.**\n\nWe first note that the UVG dataset also contains videos that are much more temporally dynamic than “Honeybee” (that you mentioned), such as “Jockey” or “ShakeNDry”: “Jockey” is a video of running horses, where the background and the poses of horses change rapidly (see the project webpage above for an excerpt); “ShakeNDry” is a video of an animal shaking its body and walking out of the camera frame. 
In this respect, we believe that the UVG dataset can be a more meaningful benchmark to evaluate video encoding.\n\n\nNevertheless, following your suggestion, we provide per-video performance analysis on these three videos (“Honeybee,” “Jockey,” and “ShakeNDry”), and give additional experimental results on other temporally dynamic videos; please refer to our [Q3] below.\n\n\n---\n\n**[W4] Lack of comparison with learning-based video compression methods.**\n\nAs we mentioned earlier (in [W1] and [W2]), our main focus is not on video compression and\nwe do not compare with state-of-the-art video compression methods. However, we think investigating our work further for compression is worth it, as our approach has the potential to mitigate the limitations of learning-based compression schemes. Recall that most learning-based video compression methods learn the compression procedure from video “datasets” (e.g., [1]), while our method requires and uses only a “single test video” for representing a video as succinct parameters. Hence, while existing methods mostly suffer from the distributional shift [2], our approach does not, since it works in a “zero-shot” manner. Not limited to, compared with traditional codecs, our method can provide other intriguing properties that the existing codecs cannot achieve, e.g., video inpainting, video frame interpolation, super-resolution, etc. We think finding a better method of video compression based on our work should be definitely an interesting future direction. \n\n---",
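To make the "traditional codecs as a compression procedure for latent codes" idea in this response concrete, here is a minimal post-training sketch for a single 2D channel of a latent grid: quantize to 8 bits, then let an off-the-shelf image codec handle the entropy coding. PNG is only a stand-in; the actual codec choice, bit-depth handling, and per-channel layout used by the authors are not specified in this thread and are assumptions.

```python
import numpy as np
from PIL import Image

def compress_latent_plane(plane, path="keyframe_plane.png"):
    """Quantize one 2D channel of a latent grid to 8 bits and store it with an
    image codec; returns the value range needed to dequantize later."""
    lo, hi = float(plane.min()), float(plane.max())
    q = np.round((plane - lo) / (hi - lo + 1e-8) * 255.0).astype(np.uint8)
    Image.fromarray(q).save(path)
    return lo, hi

def load_latent_plane(path, lo, hi):
    """Inverse of compress_latent_plane: read the 8-bit plane and rescale it."""
    q = np.asarray(Image.open(path), dtype=np.float32) / 255.0
    return q * (hi - lo) + lo
```

The point of the design is that the heavy lifting (entropy coding, spatial redundancy removal) is delegated to mature codecs, so the neural representation itself only has to produce codec-friendly 2D/3D latent grids.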
" \n**[Q1] Visualization and the information included in learnable latent keyframes.**\n\nOur keyframes aim to learn meaningful contents that are shared across a given video along an axis (we added visualization of these keyframes in Appendix E). For instance, the temporal axis ($\\mathbf{U}_{\\theta_\\mathtt{xy}}$) learns common contents in every timeframe, such as background and the watermark in the video which is invariant to timesteps.\n\nThe meanings of the other two keyframes ($\\mathbf{U\\}_{\\theta_\\mathtt{xt}}$ and $\\mathbf{U}\\_{\\theta_\\mathtt{xt}}$) may not be straightforward from their visualizations, as the shared contents across the other two directions in raw RGB space is often ambiguous. However, we note that we are learning the common contents in “latent space”; although they do not seem straightforward, they indeed play a crucial role in promoting the succinct parametrization of a given video. We also performed the ablation study to show the effect of these two keyframes; please refer to Figure 8 for the result.\n\n---\n\n**[Q2] How the BPP is calculated and the portions of bits of each component?** \n\nWe use 32-bit precision for modulated implicit function and 8-bit precision for learnable latent keyframes and sparse positional features: we consider such different precisions in calculating the BPP value of our method. The 8-bit precision of learnable latent keyframes and sparse positional features originates from their compression procedure of them after training, as we quantize them as 8-bit precision and leverage traditional image and video codecs (respectively) to reduce their size. Here, we note that we train the model using 32-bit precision for all latent features and the modulated implicit function before the compression.\n\nAfter operating our proposed compression scheme, the number of bits (or information) is mostly in the order of sparse positional features, learnable keyframes, and the modulated implicit function. For instance, for encoding Yachtride video with NVP-L model, each component uses the following number of bits (reported with kilobyte (kB):\n- Learnable latent keyframes: 18,147kB (30.8%)\n- Sparse positional features: 40,070kB (68.1%)\n- Modulated implicit function: 617kB (0.01%)\n\n---\n\n**[Q3] Does the proposed method achieve similar performance on videos with high temporal variations?**\n\nOur method achieves similar performance on videos with high temporal variations. As mentioned above (in [W3]), the UVG benchmark contains videos with dynamic motions, e.g., Jockey and ShakeNDry. On the project page (see also Figure 1), we exhibit how well our method, NVP, encodes Jockey with high quality while simultaneously achieving compute-/parameter-efficiency. We also exhibit on the webpage how well NVP succinctly encodes a ShakeNDry without suffering from any artifacts (see also Figure 3). \n\nNonetheless, to further address your concern, we provide the video-wise quantitative result of Jockey and ShakeNDry videos as well as the result of other videos not in the UVG benchmark that contains dynamic, complex motions by conducting additional experiments. In particular, we consider the following three new videos:\n- (Street) A timelapse video of a London street. People, cars, and buses are moving at different speeds as the traffic signal changes [3].\n- (City) A timelapse video of the city at night. Lots of cars are moving speedily [4].\n- (Surfing) A video of a man surfing in the ocean. 
Huge ocean waves are changing dramatically and fast [5].\n\nAs shown in the below table, our method also shows similar, or even better results in different, diverse datasets which are much more dynamic than the Honeybee video, validating the effectiveness of our method regardless of temporal variations. We also provide the visualization of these videos on our project page and added Appendix F in the revision to include these additional experiments.\n\n\\begin{array}{l cccccc}\n\\hline\n& \\text{Honeybee (static)} & \\text{Jockey} & \\text{ShakeNDry} & \\text{Street} & \\text{City} & \\text{Surfing} \\newline\n\\hline\n\\text{BPP} & 0.409 & 0.325 & 0.305 & 0.214 & 0.173 & 0.311 \\newline\n\\text{PSNR} & 38.58 & 38.19 & 37.67 & 38.90 & 38.45 & 43.71 \\newline\n\\hline\n\\end{array}\n\n---\n\n[1] Lu et al., DVC: An End-to-End Deep Video Compression Framework, CVPR 2019 \n[2] Agustsson et al., Scale-space flow for end-to-end optimized video compression, CVPR 2020 \n[3] https://pixabay.com/videos/id-28693/ \n[4] https://pixabay.com/videos/id-19627/ \n[5] https://pixabay.com/videos/id-110734/ ",
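The BPP accounting described in [Q2] above boils down to dividing the total number of stored bits across all components by the number of pixels in the video. The sketch below plugs in the quoted Yachtride/NVP-L component sizes; the frame count, resolution, and kB convention are assumptions, so the printed value is only indicative, not a reproduction of the paper's reported figure.

```python
def bits_per_pixel(component_bytes, n_frames, height, width):
    """Total stored bits over all components divided by the number of pixels."""
    total_bits = 8 * sum(component_bytes.values())
    return total_bits / (n_frames * height * width)

# Illustrative numbers only: the quoted Yachtride/NVP-L sizes (kB -> bytes) with an
# assumed 600-frame video at 1920x1080 resolution.
components = {
    "latent_keyframes": 18_147 * 1000,
    "sparse_features": 40_070 * 1000,
    "modulated_mlp": 617 * 1000,
}
print(round(bits_per_pixel(components, 600, 1080, 1920), 3))  # roughly 0.38 under these assumptions
```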
" We deeply appreciate your thoughtful comments and efforts in reviewing our manuscript. We mark our major revision in the revised manuscript with “blue.” We also illustrate several additional images and videos on our project page to answer your questions: https://neurips2022-nvp.github.io. We respond to each of your comments one-by-one in what follows. \n\n---\n\n**[W1] Editorial comments on Abstract, Introduction, and Method.**\n\nThank you for your insightful comment! As you suggested, we improved the writing and illustration as follows:\n- (Abstract) We explicitly highlight the role of latent keyframes and the contribution of our method.\n- (Introduction) We enumerate the numerous intriguing properties of CNRs in general (L26-27) and video CNRs (L32) so that readers better understand the motivation for developing video CNRs. \n- (Method) We re-write several sentences (e.g., L134-135) on latent keyframes and fix the set notation to prevent confusion.\n\n---\n\n**[W2] How, where is the “5 min” in Introduction measured?** \n\nThank you for pointing this out. It is evaluated under a single V100 32GB GPU and 28 instances from a virtual CPU of Intel Xeon Platinum 8168 GPU, where the time is measured from the time when the first iteration starts for each of the baselines. We also clarified this in the revision (L74). \n\n---\n\n**[W3] Figure 2 can be more informative to give details on the construction of the keyframes.**\n\nThank you for the suggestion. We add Figure 17 to illustrate how the latent keyframes are constructed, and mention this figure in the caption of Figure 2.\n\n---\n\n**[W4] Did the authors try different design choices of keyframes?**\n\nWe tried several different operations to compute the latent features from a given coordinate, and the latent grids (e.g., bicubic, nearest operations), but the current choice worked best. Still, there are many potential directions of developing keyframes one can explore, e.g., considering latent keyframes across axes other than spatio-temporal axes may increase the expressive power of the overall framework and would become an interesting direction. \n\n---\n\n**[W5] Improvement from modulation seems marginal .** \n\n\n\n\nWe agree the ablation study on a single Jockey video (in the initial manuscript) may mislead you to think the improvement is marginal. However, modulation indeed provides considerable improvement; on average of all videos in the UVG benchmark, modulation improves the PSNR metric (higher is better) from 38.04$\\rightarrow$38.89 (we update this in the revision). In addition, when used alone, modulation dramatically improves the compute-efficiency: to validate this, we provide an additional ablation study on modulation without sparse positional features on Beauty video in UVG-HD. 
As shown in the below table, our method reaches 30.68 (measured with PSNR metric; higher is better) with the modulation, which is >15.01$\\times$ compute-efficient than the model without modulation.\n\n\\begin{array}{l c c c}\n\\hline\n\\text{Method} & \\text{\\\\# Params.} & \\text{PSNR } (\\uparrow) & \\text{Training time} \\newline\n\\hline\n\\text{Latent keyframes} & \\text{138M} & 30.66 &1276\\text{s} \\newline\n\\text{Latent keyframes + Modulation} & \\text{138M} & 30.68 & 85\\text{s}\\newline\n\\hline\n\\end{array}\n\n\n---\n\n**[W6] Adding standard deviation in ablation study (Table 3).**\n\nIn the initial manuscript, we did not report standard deviations as we conducted the ablation study with a single video in the UVG benchmark (Jockey), a video in which the temporal variation is large. As you highlighted, we performed the ablation study with all 7 videos and added the standard deviation with the average and updated Table 3 like the below table in the revision.\n\n\\begin{array}{c c c c c}\n\\hline\n\\text{Keyframes} & \\text{Sparse feat.} & \\text{Module.} & \\text{\\\\# Params.} & \\text{PSNR }(\\uparrow) \\newline\n\\hline\n\\color{red}\\tt{X} & \\color{green}\\checkmark & \\color{green}\\checkmark & \\text{136M} & 31.21\\small{\\pm2.80} \\newline\n\\color{green}\\checkmark & \\color{red}\\tt{X} & \\color{green}\\checkmark & \\text{138M} & 32.44\\small{\\pm4.48}\\newline\n\\color{green}\\checkmark & \\color{green}\\checkmark & \\color{red}\\tt{X} & \\text{147M} & 38.04\\small{\\pm2.27}\\newline\n\\hline\n\\color{green}\\checkmark & \\color{green}\\checkmark & \\color{green}\\checkmark & \\text{136M} & \\mathbf{38.89\\small{\\pm2.11}}\\newline\n\\hline\n\\end{array}",
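As a structural illustration of how latent codes could be read out of the three axis-wise keyframes discussed in this ablation, the sketch below bilinearly interpolates each 2D grid at the corresponding pair of coordinates and concatenates the results. How NVP actually aggregates the three codes, and how they are combined with the sparse positional features before the modulated implicit function, is not shown in this thread, so treat this as one plausible reading rather than the authors' design.

```python
import torch
import torch.nn.functional as F

def query_keyframes(U_xy, U_xt, U_yt, coords):
    """Sample the three 2D latent keyframes at (x,y), (x,t), (y,t) and concatenate.
    U_*: (1, C, H, W) latent grids; coords: (N, 3) with (x, y, t) in [-1, 1]."""
    x, y, t = coords[:, 0], coords[:, 1], coords[:, 2]

    def sample(grid, u, v):
        pts = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)           # (1, N, 1, 2)
        out = F.grid_sample(grid, pts, mode="bilinear", align_corners=True)
        return out.squeeze(0).squeeze(-1).t()                         # (N, C)

    return torch.cat([sample(U_xy, x, y), sample(U_xt, x, t), sample(U_yt, y, t)], dim=-1)
```

Each keyframe is queried with the two coordinates it does *not* average over, which is why it ends up storing content shared along the remaining axis (e.g., U_xy captures what is common across time).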
" **[W7] More experiments on additional datasets.**\n\nThank you for your constructive comment. Following your suggestion, we provide additional experimental results on a Big Buck Bunny video (following setups in NeRV [1]) and more complex videos. In particular, we consider the following three new videos:\n- (Street) A timelapse video of a London street. People, cars, and buses are moving at different speeds as the traffic signal changes [3].\n- (City) A timelapse video of the city at night. Lots of cars are moving speedily [4].\n- (Surfing) A video of a man surfing in the ocean. Huge ocean waves are changing dramatically and fast [5].\n\nAs shown in the below table, our method also shows similar, or even better results in different, diverse datasets. We also provide the visualization of these videos on our project page and added Appendix F in the revision to include these additional experiments.\n\n\\begin{array}{c ccccc}\n\\hline\n& \\text{UVG-HD (avg.)} & \\text{Street} & \\text{City} & \\text{Surfing} & \\text{Big Buck Bunny} \\newline\n\\hline\n\\text{BPP} & 0.412 & 0.214 & 0.173 & 0.311 & 0.456 \\newline\n\\text{PSNR} & 37.71 & 38.90 & 38.45 & 43.71 & 39.88 \\newline\n\\hline\n\\end{array}\n \n---\n\n**[W8] Further discussion about possible limitations of our method.**\n\nThank you for your incisive comment. As you stated, applying our model to an extremely long video may not work well, as they include multiple dynamic scenes which are not highly related to each other and thus make it challenging to learn the common contents. Developing a better form of latent keyframes suitable for much longer videos should be one of the important future directions of our work. One of the naive (but strong) baselines can be considering multiple keyframes per each axis to learn shared contents at each \"part\" of a given video, rather than having a single keyframe to capture the common content in the whole video. \n\nMoreover, as our primary focus is not video compression, the current compression performance is sometimes below the state-of-the-art video codecs (e.g., HEVC) or learning-based compression method (e.g., [6]). However, we think investigating our work further for compression is worth it, as our approach has the potential to mitigate the limitations of learning-based compression schemes. Recall that most learning-based video compression methods learn the compression procedure from video “datasets” (e.g., [6]), while our method requires and uses only a “single test video” for representing a video as succinct parameters. Hence, while existing methods mostly suffer from the distributional shift [7], our approach does not, since it works in a “zero-shot” manner. Not limited to, compared with traditional codecs, our method can provide other intriguing properties that the existing codecs cannot achieve, e.g., video inpainting, video frame interpolation, super-resolution, etc. We think finding a better method of video compression based on our work should be definitely an interesting future direction. \n\n---\n\n**[Q1] Scalability of NVP with the large-scale videos (spatial resolution & the number of frames).**\n\nOur method, NVP, can be scaled up to more large-scale videos that consist of higher resolution and more frames. To validate the scalability of NVP, we provide an additional experiment on a video consisting of 1200 frames of 3840$\\times$2560 resolution, 4$\\times$ higher resolution, and 2$\\times$ to 4$\\times$ longer than the videos in the UVG benchmark. 
Measured with the PSNR metric (higher is better), NVP exceeds 37.77 with 12 hours (encoding time) and 0.130 (BPP), which is even better than the average result on the UVG benchmark (36.34 with 12 hours (encoding time) and 0.214 (BPP)). We also provide the qualitative results of this video on our project page. \n\n---\n\n[1] Chen et al., NeRV: Neural Representations for Videos, NeurIPS 2021 \n[2] Müller et al., Instant Neural Graphics Primitives with a Multiresolution Hash Encoding, SIGGRAPH 2022 \n[3] https://pixabay.com/videos/id-28693/ \n[4] https://pixabay.com/videos/id-19627/ \n[5] https://pixabay.com/videos/id-110734/ \n[6] Lu et al., DVC: An End-to-End Deep Video Compression Framework, CVPR 2019 \n[7] Agustsson et al., Scale-space flow for end-to-end optimized video compression, CVPR 2020 ",
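For reference, since PSNR is the quality metric quoted throughout the tables above, the following is a minimal illustrative sketch of how it is typically computed from a reconstructed frame; it is not taken from the paper or the rebuttal, and the function name and `max_val` default are our own choices.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For 8-bit frames one would pass `max_val=255` instead of the unit-range default assumed here.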
" We deeply appreciate your thoughtful comments and efforts in reviewing our manuscript. We mark our major revision in the revised manuscript with “blue.” We also illustrate several additional images and videos on our project page to answer your questions: https://neurips2022-nvp.github.io. We respond to each of your comments one-by-one in what follows. \n\n---\n\n**[W1] Overall originality seems not high.**\n\nAs highlighted by Reviewer kkNU, we believe that we propose various novel ideas, e.g., learnable latent keyframes across each spatio-temporal axis and the compression procedure. More importantly, we think that the originality of our method can be found not only in its detailed technical components but also in its remarkable goal. Namely, differentiated from previous CNRs, our method, NVP, is the first video CNR that achieves all three desired aspects of parameter-, compute-efficiency, and high-quality encoding simultaneously in encoding videos. Specifically, prior works have sacrificed one of them: for instance, NeRV [1] requires a very long encoding time to encode a given video, and Instant-ngp [2] requires lots of memory than the original video size to accomplish the compute-efficiency. In this respect, we think that we are tackling and solving an important problem of high originality. \n\n---\n\n**[W2] Unfair, insufficient experimental setups in some aspects.**\n\nFor a fair comparison, we use the official implementations of all of the baselines, following their exact training specification and hyperparameter setups, as stated in Appendix A. We also provide a detailed answer and more experimental results to address your concern on experimental setups; please refer to [Q1] and [Q2] below.\n\n---\n\n**[Q1] Different experimental setup (Table 1) from NeRV.** \n\nWe mainly conduct the experiments under the UVG benchmark since we believe UVG can provide a more extensive comparison than a single Big Buck Bunny video used in NeRV [1]. Specifically, UVG contains “multiple” videos, where each video has different characteristics (e.g,, motion and texture) [3] consisting of frames of higher resolution and a longer length than the Big Buck Bunny video. For example, the resolution of videos in UVG datasets are 1920$\\times$1080 (2.6$\\times$ larger than Big Buck Bunny) and mostly have a length of 600 ($\\times$4.5 times longer than Big Buck Bunny). Moreover, in contrast to NeRV, which only reports the encoding quality at the convergence, we report the result at various encoding times; this is because compute-efficiency is one of our major criteria to evaluate CNRs, not only their final performance and parameter-efficiency. \n\nNevertheless, we also agree that providing a result on Big Buck Bunny can present a more intuitive comparison to NeRV and make the experiment more comprehensive. 
Thus, we present the comparison result of NeRV and our method on the Big Buck Bunny video (we added this in Appendix F) in the table below, which validates the superiority (or at least comparable) of our method in the setup of NeRV.\n\n\\begin{array}{l ccc}\n\\hline\n\\text{Method} & \\text{BPP} & \\text{PSNR } (\\uparrow) & \\text{Encoding time } (\\text{hr}, \\downarrow) \\newline\n\\hline\n\\text{NeRV-S} & 0.128 & 32.36 & 1.734\\newline\n\\text{NVP-S (ours)} & 0.136 & 32.56 & 0.925 \\newline\n\\hline\n\\text{NeRV-M} & 0.249 & 36.50 & 1.762 \\newline\n\\text{NVP-M (ours)} & 0.248 & 36.49 & 0.925\\newline\n\\hline\n\\text{NeRV-L} & 0.496 & 39.26 & 1.774 \\newline\n\\text{NVP-L (ours)} & 0.456 & 39.88 & 0.925\\newline\n\\hline\n\\end{array}\n\n---\n\n**[Q2] Missing the results of instant-ngp in Table 1 after training >15 hours.**\n\nOur main focus of >15 hours encoding time in Table 1 is to compare the parameter efficiency of different methods, while Instant-ngp [2] is proposed to sacrifice its parameter-efficiency for achieving high-quality encodings. Thus, we do not consider Instant-ngp in this situation. Nonetheless, to further alleviate your concern, we provide the results of Instant-ngp after training >15 hours, where the model size is adjusted similarly to other baselines. As shown in the below table, our method shows better results even compared with Instant-ngp in these setups. We also added this result in Table 1 and clarified the intention of each row.\n\n\\begin{array}{clccc}\n\\hline\n\\text{Encoding time}&\\text{Method}&\\text{BPP}&\\text{PSNR } (\\uparrow)&\\text{FLIP }(\\downarrow) & \\text{LPIPS }(\\downarrow) \\newline\n\\hline\n\\text{>15 hours} & \\text{Instant-ngp} & 0.229 & {28.81\\small{\\pm3.48}} & {0.155\\small{\\pm0.057}} & {0.390\\small{\\pm{0.135}}} \\newline\n\\sim\\text{12 hours}& \\text{NVP-S (ours)}&0.214 & \\mathbf{36.34\\small{\\pm2.19}} & \\mathbf{0.067\\small{\\pm0.017}} & \\mathbf{0.128\\small{\\pm{0.073}}}\\newline\n\\hline\n\\text{>40 hours}&\\text{Instant-ngp}&0.436&29.98\\small{\\pm3.39}&0.138\\small{\\pm0.051}&{0.358\\small{\\pm{0.140}}}\\newline\n\\sim\\text{17 hours}&\\text{NVP-L (ours)}&0.412&\\mathbf{37.71\\small{\\pm1.80}}&\\mathbf{0.061\\small{\\pm0.013}}& \\mathbf{0.110\\small{\\pm{0.085}}}\\newline\n\\hline\n\\end{array}",
" **[Q3/6] More comprehensive compression results (video-wise result, traditional codecs as baselines).** \n\nWe remind you that our primary goal is to design a better coordinate-based neural representation (CNR) for videos that is highly parameter-/compute-efficient and provides high-quality encoding, not to develop a video compression method. Video compression is just one of the possible use cases of CNRs, such as video inpainting, video frame interpolation, and super-resolution. Hence, we did not perform an extensive evaluation of the compression performance that considers lots of state-of-the-art video codecs as baselines. However, we want to note that we already have included HEVC (that you mentioned) and H.264, which are current advanced video codecs, to validate the potential of our approach for compression (see Figure 6 in the manuscript). Moreover, we also agree the more detailed video-wise compression results would further strengthen our manuscript; following your suggestion, we added the video-wise compression results on all the videos in the UVG benchmark in Appendix G.\n\nAlthough our main focus is not on video compression, we think investigating our work further for compression is worth it, as our approach has the potential to mitigate the limitations of learning-based compression schemes. Recall that most learning-based video compression methods learn the compression procedure from video “datasets” (e.g., [4]), while our method requires and uses only a “single test video” for representing a video as succinct parameters. Hence, while existing methods mostly suffer from the distributional shift [5], our approach does not, since it works in a “zero-shot” manner. Not limited to, compared with traditional codecs, our method can provide other intriguing properties that the existing codecs cannot achieve, e.g., video inpainting, video frame interpolation, super-resolution, etc. We think finding a better method of video compression based on our work should be definitely an interesting future direction. \n",
" **[Q4] Lack of comparison of decoding time.**\n \nIn our original submission, we indeed included the comparison of decoding time in Appendix D (we revised the manuscript to point out the comparison in L244): for instance, measured with a single NVIDIA V100 32GB GPU, NeRV-L decodes 15.28 frames per second (FPS), while our current Pytorch implementation of NVP-L (our method) decodes 5.71 FPS. Yet, we strongly believe that such a gap can be overcome via a C++/CUDA-based implementation of our method. Specifically, Instant-ngp [2] exhibits the decoding speed of CNRs can be remarkably boosted (40.10 FPS measured with the above setup) via C++/CUDA-based parallelism implementation if the architecture follows a combination of latent grids and a simple multi-layer perceptron (MLP). Recall that NVP is also composed of latent grids and MLP; we strongly believe the decoding speed of NVP can be comparable to NeRV [1] if one follows the implementation details in Instant-ngp. We also note that such parallelism is not straightforward for NeRV, which uses a convolutional neural network for its architecture. Furthermore, in contrast to NeRV, NVP synthesizes _each pixel value independently_ which does not require decoding of the entire frame at once, providing better scalability on training and inference at extremely high-resolution videos (e.g., 8K videos) under limited memory constraints. We will provide the C++/CUDA-based implementation of NVP in the final version of our manuscript. \n\n---\n\n**[Q5] Explanation about the choice of grid size.**\n\nTo answer your question, we provide the result of the ReadySetGo video measured with the PSNR metric (higher is better) under the different configurations $(H, W, S)$ of the sparse positional features. Here, we set $H \\times W \\times S$ similarly for a fair comparison among different hyperparameter setups. As shown in the below table, our method is quite robust to the selection of $(H,W,S)$. Moreover, one can observe that letting large $S$ is more beneficial: this is because (a) we select $3 \\times 3 \\times 1$ latent codes from sparse positional features and (b) the spatial keyframe often becomes more informative by capturing the common content across the temporal axis (e.g., background) and can dramatically reduce the value of $H$ and $W$. \n\n\\begin{array}{l c}\n\\hline\n(H, W, S) & \\text{PSNR }(\\uparrow) \\newline\n\\hline\n(300, 300, 600) & 36.72 & \\newline\n(450, 450, 300) & 35.87 & \\newline\n(225, 400, 600) & 36.64 & \\newline\n\\hline\n\\end{array}\n\n---\n\n[1] Chen et al., NeRV: Neural Representations for Videos, NeurIPS 2021 \n[2] Müller et al., Instant Neural Graphics Primitives with a Multiresolution Hash Encoding, SIGGRAPH 2022 \n[3] UVG Dataset: 50/120fps 4K Sequences for Video Codec Analysis and Development, MMSys 2020 \n[4] Lu et al., DVC: An End-to-End Deep Video Compression Framework, CVPR 2019 \n[5] Agustsson et al., Scale-space flow for end-to-end optimized video compression, CVPR 2020 \n",
" The authors proposed a scalable implicit neural network for reconstructing video signals in a parameter-efficient and computationally efficient manner. In order to do this, they proposed learnable positional features inspired by the existing work in the literature (Instant-ngp). In general, implicit neural network maps from input co-ordinates to RGB pixels directly using one mapping function, here the authors maps the pixel co-ordinates to intermediate latent space, and from latent space to RGB pixels using another mapping function with modulation function. In the latent space, the authors learn the key frames of the given video in the latent grids, for xy, xt, yt, xyt. These are compressed using existing standard codecs HEVC and JPEG, and interpolation is used to find the latent vectors for any given (x,y,t), and then latent to RGB mapping function. The authors show their proposed method trains faster, and reconstructs the video better in a parameter efficient way. Strengths:\nThe paper proposes the parameter and computational efficient implicit neural network to reconstruct videos from pixel coordinates. The method has the advantage of learning keyframes automatically from the videos, whereas the other compression methods do not have the capability to learn the keyframes.\nLearns the grid representations in the latent space for different directions (xy, xt, yt, xyt). The authors claim that it improves parameter efficiency of Neural video representation (NVR)\n\nWeakness:\nFor the compression of the latent spatial grids, the HEVC and JPEG compression methods are used. For the video compression applications, the combination of traditional codecs with learning-based compression is not interesting in my opinion. The traditional codecs are bound to have artifacts that the proposed method incorporates into the proposed method \nThe performance of the proposed method are below the HEVC and H264, and the datasets used to evaluate the proposed method is not a good benchmark. I see that in the honeybee video, the video is almost static and it is difficult to conclude whether the proposed method can capture the dynamic motions in the temporal dimensions.\nThe proposed method is not compared with other state-of-the-art methods in learning-based video compression methods.\n 1) In learning latent keyframes, what kind of information do the keyframes contains, and in terms of visualization how does it look like?. does it learn image like structure?\n2) how the BPP is calculated? whether the weights of the network are used in full precision or half precision to compute the BPP. In the total filesize, how much information is from latent grids, and how much information is from the latent to RGB mapping function.\n3) I visualized a few videos in the dataset used in the paper, the temporal motion in the videos is not much dynamic, it's almost static. Maybe this is the reason, the method is able to have good reconstruction results. I suspect whether the proposed might have a similar performance with high temporal variations.\n the limitations are discussed in the paper",
" The paper proposes a coordinate-based architecture for video representation. The key element of the architecture is the introduction of keyframes (positional encodings of the features). The approach is validated on the UVG benchmark showing benefits over prior art. The paper is in general well-structured and well written. The model design choices are motivated, and the reported results looks solid. The idea is moderately original and the quality of the work is good enough to recommend the acceptance. However, he clarity of the presentation could be improved and the authors could highlight better the significance of the work. See detailed comments and questions below.\n\n**Abstract:**\n- Abstract could be strengthened to reflect better the paper contribution and its importance.\n\n**Introduction:**\n- The introductory section positions well the work w.r.t prior art.\n- Introduction does not motivate why research on CNRs for video representation is an important research avenue. Adding such motivation would make the paper stronger.\n- The connection between learnable positional features and keyframes is now well exposed in the intro and in the abstract. It is hard to understand why keyframes could play the role of learnable positional features.\n- Fig 2 could be a bit more informative; the key element of the paper seems to be the construction of the keyframes however, the figure does not give details on how the keyframes are constructed.\n- L24 “… numerous appealing properties …“- only one property is mentioned.\n- L36 “… while enjoying lots of intriguing properties …” – could the authors enumerate the properties\n- L71 “… 34.43 in 5 minutes …” – can the authors add details here? What is the hardware? How is the time measured?\n\n\n\n**Methodology:**\n- In general methodology section is well written and easy to follow. One exception is the construction of the keyframes. This part could benefit from some re-writing. Also, adding a figure explaining the process could ease the understanding. Moreover, it is unclear why U_l is represented as a set.\n- Did the authors consider different types of or designs of keyframes?\n- Modulation, in L174 the authors write: “… we found such a simple MLP architecture lacks an expressive power and fails to capture the complex dynamics… “. However, based on ablation study the benefit of modulation seems marginal (in terms of PSNR). Could the authors comment on this?\n\n**Experiments:**\n- The ablation table is missing std, could the std information be added to the ablation table?\n- Adding additional dataset would make the validation and observations stronger.\n How does the method scale with video size in terms of both the spatial resolution of the frame as well as the number of frames? The limitations are discussed but are centered around the general idea of CNR for video representation. The discussion of the possible limitation of the model design could be extended. For example, one might expect that the keyframe idea (amortizing over the video) would not work well for long videos composed of multiple possibly dynamic scenes.\n\n",
" This paper proposes a new CNR framework by introducing “learnable positional features” and decomposing the video into 2D and 3D latent representations. Besides, a compute- and memory-efficient compression procedure is introduced to further reduce the parameters and training cost. Experiments are conducted on UVG-HD benchmark for evaluating video encodings. Strengths: \nThis work proposes a new CNR framework to solve the dilemma of quality and efficiency in video encoding. The experimental results have been improved on both PSNR metrics and parameter-efficiency. The overall clarity of the paper is good.\n\nWeakness: \nThis work is mainly built based on the existing CNR framework, but the overall originality is not high. The experimental part is not sufficient, and some experimental settings are also unfair to some extent. Here are some concerns to be addressed:\n\n1. My major concern about this work is the experiment results shown in Table. 1, in which the experimental setting is different from NeRV. Maybe the results on “Big Buck Bunny” sequence should be given to present a more intuitive comparison.\n2. Also in Table. 1, why is instant-NGP not compared after encoding time > 15hours?\n3. The results of video compression evaluation should be more detailed. Figure. 4 only shows the results on ShakeNDry and Beauty. More comprehensive experimental results should be presented.\n4. Lack of comparison of decoding time.\n5. Why is the grid size selected as H×W×S? It is better to show the influence of different grid sizes.\n6. More experiments comparing with traditional coding frameworks such as VVC and HEVC are suggested.\n Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"WFcfavFTmUH",
"jAhoqJ3fvR3",
"AssIQQ89b_G",
"L4RGdu7PK1v",
"jWgxQGbbMK",
"xRB1tU4RzXI",
"Xc81zRU3pKT",
"jAhoqJ3fvR3",
"nips_2022_OxfI-3i5M8g",
"nips_2022_OxfI-3i5M8g",
"DAFO2_y9QS",
"DAFO2_y9QS",
"sNf6xpmXSPy",
"sNf6xpmXSPy",
"dqgAN-6_b_",
"dqgAN-6_b_",
"dqgAN-6_b_",
"nips_2022_OxfI-3i5M8g",
"nips_2022_OxfI-3i5M8g",
"nips_2022_OxfI-3i5M8g"
] |
nips_2022_UPZCt9perOn | Metric-Projected Accelerated Riemannian Optimization: Handling Constraints to Bound Geometric Penalties | We propose an accelerated first-order method for the optimization of smooth and (strongly or not) geodesically-convex functions over a compact and geodesically-convex set in Hadamard manifolds, which we access via a metric-projection oracle. It enjoys the same rates of convergence as Nesterov's accelerated gradient descent, up to a multiplicative geometric penalty and log factors. Even without in-manifold constraints, all prior fully accelerated works require their iterates to remain in some specified compact set (which is needed in worst-case analyses due to a lower bound), while only two previous methods are able to enforce this condition and these, in contrast, have limited applicability, e.g., to local optimization or to spaces of constant curvature. Our results solve an open question in (Kim and Yang, 2022) and another question related to one posed in (Zhang and Sra, 2016). In our solution, we show that we can use projected Riemannian gradient descent to implement an inexact proximal point operator that we use as a subroutine, which is of independent interest.
| Reject | The paper deals with accelerated methods on Riemannian manifolds. A particular challenge that the paper tries to address, which the AC believes is important, is related to the bounding of the iterates. The paper starts with an explicit bounding constraint on the manifold (and relaxes to the ball constraint for certain manifolds) and shows that the proposed algorithm can respect that while achieving acceleration. The reviewers, including this AC, see the merits of the paper. However, the paper in its current form is far from complete. A particular concern is the empirical performance of the algorithm; resolving it should strengthen the paper. I would encourage the authors to build on the discussions and polish the paper accordingly.
Even though the paper has positive scores, in its current form it is a borderline paper with significant scope for improvement. For this reason, the AC cannot accept the paper. | test | [
"OpWEagbPtn1",
"U8CCrJb8pH",
"DAc2IU8pUJ6",
"6lmR8RIwmq",
"6fEkijqVXLz",
"SLmydIBk_wa",
"WfW0aTSeJQM",
"LsL9guTP5E2",
"H4NNasFDUmk",
"Ohfu_ZO6OAE",
"PWJn03a-2nS",
"hphTALmqLSk",
"VXtpvx1vKG9U",
"wqlFQYdwCmV",
"APk1U6e1Nwb",
"DNcVRKHtra7"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" To all reviewers, please note we updated the supplementary material to include a new section in the Appendix (section C) including the new subroutines and their proofs of linear convergence",
" Could you clarify why after the response your impression of the paper is the same? The main concerns, namely the implementation of the metric projection oracle, how the method compares favorably to other methods if one takes into account the projection oracle, and the practicality of the method were all addressed: \n\nFor all the previously known problems there are no in-manifold constraints and since we need to impose at least some constraint to bound geometric penalties we can decide to impose a Riemannian ball as constraints, whose projection oracle is given in closed form. Thus, the method only requires a gradient oracle and the standard geometric operations of the exponential map, inverse exponential map and parallel transport.\n\nIf there are any other concerns please could you clarify so we can comment on them? Thank you.\n",
" We apologize if the reviewer felt that the discussion was being too high-level, but we are happy this issue was raised so we actually have a two-way discussion and we can clarify!\n\nPlease read below for a clarification on all these issues.\n\nRegarding properties of previous methods, we were always very explicit on what methods assumed the iterates to remain in a pre-specified compact set: all of them except [Mar22] and [CB21], which are limited to constant curvature and local optimization, respectively, as discussed in the paper. We cited all of the papers and provided an explanatory table indicating which methods can and which cannot deal with constraints (column \"C?\") Our method\n\nRegarding the \"cheap\" implementation of the projection oracle (its the only case in which we have referred to anything as cheap), we believe we were very explicit on what the operation is, but we apologize if the explanation was not clear and will repeat the argument here: in the post addressed to all reviewers, we proved that the metric projection if the feasible set is a Riemannian ball has a closed form! That is, there is no need to implement the projection oracle with a possibly expensive optimization subalgorithm. This projection takes the form $P_X(x) = Exp_{x_0}(R \\cdot Log_{x_0}(x)/|Log_{x_0}(x)|)$ if $x\\not\\in X$ and $P_X(x) = x$ otherwise. This closed form uses just one exponential map $Exp_{x_0}$ and one inverse exponential map $Log_{x_0}$, which are basic building blocks (see Algorithm 1 for more uses of these maps).\n\n\nRegarding the \"most problems fitting into this framework [i.e. Riemannian optimization with no in-manifold constraints]\" we note that all the problems that were cited that apply to our method (i.e. Hadamard manifolds and geodesically convex or strongly geodesically convex functions) have no constraints (except for the manifold, which can be considered a constrained that has been lifted by optimizing over it as the domain. Having no other constraints is what we refer to as no in-manifold constraints). These are: \n+ Dictionary learning [CS17; SQW17]\n+ Robust covariance estimation in Gaussian distributions [Wie12]\n+ Gaussian Mixture Models [HS15].\n+ Operator scaling [All+18].\n+ One can see more problems in the survey that we cited [HS20, Section 6 \"Example applications\"] (Karcher mean, Wasserstein Barycenters).\n\nThus, for all the problems above, we can decide on the constrain to use to keep the iterates of the algorithm bounded and to bound geometric penalties. And hence we can use a Riemannian ball, for which we now have a closed form solution for its metric-projection oracle . As we said in our response to reviewer qP9H, our algorithm does not apply to optimization under orthogonality constraints and sparse PCA since those cases are defined over manifolds that are not Hadamard and extending our results to allow for positive curvature is an interesting direction of future research. \n\nBecause of these applications and because of its level of generality, obtaining a Riemannian version of accelerated gradient descent to Riemannian manifolds is a problem that has received a lot of attention in the last few years [Liu+17, ZS18, Ali+19, HW19a, Ali+20, Lin+20, AS20, CB21, Mar22, KY22] plus papers studying lower bounds [HM21, CB21]. 
We managed to generalize the Euclidean Accelerated Gradient Descent algorithm to a practical algorithm without relying on unreasonable assumptions (like assuming that the iterates will stay within some pre-specified set without imposing any projection mechanism) and improved on several other aspects. We believe this is a significant contribution to the topic.\n\nAlso, for completeness, we updated our submission of the supplementary material and added Appendix C, which contains the two new subroutines and their analyses.\n\nWe hope to have been completely clear about everything that the reviewer suggested might not be \"exact\" or might be too \"high-level\", and if that is the case we encourage them to reconsider their evaluation of our work.\n",
" Thank you for further elaborations. I have read other reviews and discussions as well. My main concern is still that the provided discussions are mostly high-level, with insufficient 'exact' references/arguments for claims (most methods having some property, most problems fitting into this framework, cheaply [which is vague if no specific application is under consideration], etc). I will keep my previous score.",
" Dear Reviewer 9V2i,\n\nWe hope to have convinced you that the assumptions on the function and feasible set are not strong and that they are the right thing to use, which is the reason why they are startanrd. We hope to have been clear with respect to our extra result regarding the projection oracle in the Riemannian unconstrained case (the most important case), now we can project cheaply, resulting in a practical algorithm that only requires a gradient oracle and standard geometric quantities. Moreover, we provide new subroutines for unaccelerated linear convergence of constrainted strongly g-convex problems, given that we found that the one in the COLT paper of Zhang and Sra was wrong.\n\nWe did not hear from you yet, but we would be delighted to learn whether our responses answered you questions. Thanks.\n",
" Dear reviewer HxJc\n\nPlease can you let us know what you think? We believe our work improves greatly over the previous algorithms attempting to achieve Riemannian acceleration and we make other important technical contributions (such us accelerated Riemannian proximal point method with exact prox, not needing a point with 0 gradient to be inside of the set, and a generic accelerating reduction) \n\nWe believe we replied to all of your concerns and we hope to having giving you a more comprehensive view of our work. Do our responses indeed address all the concerns? If so, please let us know! If not, please comment so we can clarify further.",
" Thanks to the authors for their comments. After reading your response, I still think that this is a borderline paper (leaning towards acceptance); hence, I will keep my score.",
" + For **any** $\\zeta$, assuming access to an algorithm that minimizes the upper bound that smoothness yields when computing $\\nabla f(x_t)$ i.e. $\\arg\\ \\min_{x\\in X}\\{\\langle \\nabla f(x_t), \\operatorname{Log}_{x_t}(x) \\rangle + \\frac{L}{2}d(x_t, x)^2 \\}$, then we have an analysis showing linear rates of convergence (and actually an approximate solution of such subproblem also works). This subproblem, in the Euclidean case, is equivalent to the projection operator. However, in the Riemannian case, this and the metric-projection operator are two different things. \n + Solving the subproblem does not require calling the gradient oracle and therefore we show that the gradient oracle complexity of optimizing smooth and (possibly strongly) g-convex, for any $\\zeta$ is as it is displayed on the table, without making any assumptions about \"the iterates of the algorithm staying in a pre-specified compact set, without enforcing this condition\". This is useful, among other things, because it shows that it is not possible to prove a lower bound on the oracle complexity that has a hardness of the geometry factor worse than our $\\zeta^2$ (the only global constrained solution that existed, [Mar21], had an exponential penalty on $\\zeta$ and it was not clear before this paper if one could have lower oracle complexity, due to the exponentially-growing volume of balls in spaces of negative curvature).\n\nWe can write the analysis of these two subroutines here upon request of the reviewers, and in any case we will add them to the final version of our paper with all level of detail and we will modify the exposition of the paper accordingly (i.e., modifying the remark about the implemented subroutine, adding the new 2 different subroutines, and making clear in the text what the extent of the results is).\n\n**Conclusion**\n\nIn conclusion, all of our main results still hold, except for the lower level of generality in the first point and we add a new result regarding the metric projection, that makes the method be practical:\n\n+ We still solve the open question of Kim and Yang 2022, now published on ICML 2022, asking about whether acceleration can be obtained without making the assumption \"the iterates of the algorithm stay in a pre-specified compact set without enforcing them to be in it\". **Without making any of those assumptions:**\n + For $\\zeta <$ a universal constant, we show that projected Riemannian Gradient Descent enjoys linear rates and thus it can be used as a subroutine to yield our accelerated algorithm.\n + For any $\\zeta$, we show the upper bound in oracle complexity that enjoys the same rates as Euclidean Nesterov's accelerated gradient descent up to the geometric constant factor $\\zeta^2$.\n\n+ We provide a generic and cheap to compute metric-projected oracle when the constraint is a Riemannian ball. We argue that most Riemannian problems have no in-manifold constraints and therefore we can choose the constraint we like in order to bound geometric penalties and we can use a Riemannian ball. This makes the method only require a gradient oracle. Thus, it is efficient, and works under practical assumptions\n\n+ If we can compute the exact Riemannian proximal point operator and we use it as the implicit gradient descent step in Line 8 of Algorithm 1, then the method is an accelerated proximal point method. One such Riemannian algorithm was previously unknown in the literature as well.\n\n+ We provide the generic accelerating reduction. 
If a new constrained algorithm with linear rates is designed for any setting (e.g. linear convergence for constrained strongly g-convex finite-sum problems) then our framework provides an accelerated algorithm. (in the example, it would be the first accelerated Riemannian algorithm for finite-sum problems). Also if another subroutine for strongly g-convex and smooth problems is designed, besides from our two subroutines (for example, one for a generic $\\zeta$) then our reduction yields an accelerated algorithm under the same setting.\n\n+ Other technical details and improvements: smoothness and g-convexity is only required inside of X, we optimize with respect to a global minimizer in X without assuming this is a point with 0 gradient. \n",
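To make the subroutine discussed above easier to picture, here is a schematic sketch of the metric-projected Riemannian gradient descent update it refers to. This is an illustration only, not the authors' Algorithm 1 or their new analyses; `grad`, `exp_map`, and `project` are placeholder callables that the user would supply for the manifold and feasible set at hand.

```python
def projected_rgd(x0, grad, exp_map, project, smoothness_const, n_steps):
    """Schematic metric-projected Riemannian gradient descent:
    x_{t+1} = P_X( Exp_{x_t}( -(1/L) * grad f(x_t) ) ),
    where L is the smoothness constant and P_X the metric projection onto X."""
    x = x0
    for _ in range(n_steps):
        v = grad(x)  # Riemannian gradient at x, represented as a tangent vector
        x = project(exp_map(x, -v / smoothness_const))  # step along the manifold, then project
    return x
```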
" TL;DR: \n\n1. We now have a simple cheap-to-compute projection operator for a Riemannian ball constraint. In order to bound geometric penalties for problems with no in-manifold constraints (and most problems we are aware of are of this kind) we can decide to impose a ball as our constraint, so this covers most problems.\n\n2. We very recently found that the analysis of the projected Riemannian Gradient Descent algorithm made by Zhang and Sra in their COLT 2016 paper is wrong and so we cannot use it as subroutine. Our method admits any linearly convergent algorithm for strongly g-convex smooth problems as subroutine and we designed 2 new subroutines that work under different assumptions which still allow to obtain acceleration and improve over previous works at the expense of some generality. We will add this to the final version of the paper. See below for a full discussion.\n\n**Detailed explanation:**\n\n**1. Ball constraint**\n\n+ If the Riemannian problem is unconstrained (has no in-manifold constraints) we still need **some** constraint to bound geometric penalties, but **any** constraint works and we can decide to use a Riemannian ball with center at $x_0$ as constraint. Usually Riemannian optimization turns constrained problems into unconstrained ones whose domain is the manifold. That is, for most Riemannian problems (like the ones we cited) the constraints are codified by the manifold and there are no other in-manifold constraints, so a metric-projection oracle for a ball would cover most problems in Riemannian optimization.\n+ We have a new, simple, but powerful result that allows us to build a very cheap projection oracle when the constraint is a ball: The metric projection of a point $x$ outside of the Riemannian ball is the point of the border of the ball that is in the geodesic that joins $x$ and $x_0$. That is, if we call $R$ the radius of the ball, we have that $P_X(x) = Exp_{x_0}(R\\cdot Log_{x_0}(x)/\\|Log_{x_0}(x)\\|)$ if $x \\not\\in X$ and $P_X(x) = x$ otherwise. The proof is simple and as follows. We work with uniquely geodesic feasible sets (and a Riemannian ball in a Hadamard manifold is uniquely geodesic) and so a curve between two points is a geodesic if and only if it is globally distance-minimizing. By definition, the projection point $P_X(x)$ satisfies $d(x, P_X(x)) \\leq d(x, y)$ for all $y \\in X$, so if $x$ is outside of the ball then $P_X(x)$ is on the border of the ball and $d(x_0, P_X(x))=R$ and so the two geodesic segments from $x$ to $P_X(x)$ and from $P_X(x)$ to $x_0$ form a curve that is globally distance minimizing and thus this curve is the geodesic joining $x$ and $x_0$. Q.E.D.\n\n**2. New subroutines**\n\nThe main point of our paper is to show that our Algorithm 1 and its analysis provide a generic framework for obtaining acceleration in Riemannian manifolds without making the undesirable assumption that is that the iterates remain in some pre-specified compact set. 
This is a generic reduction that, from a linearly convergent **unaccelerated** constrained algorithm for strongly g-convex problems, returns **accelerated** algorithms for smooth and g-convex or strongly g-convex problems.\n\nWe very recently, for the paper by Zhang and Sra \"First-order Methods for Geodesically Convex Optimization\" published in COLT 2016 and that contains a metric-projected Riemannian gradient descent and a \"proof\" of constrained linear convergence for strongly g-convex smooth problems, found out a mistake in the proof of convergence, when the problem is constrained. So we cannot use it in our Remark 2.3 to provide the example showing the reduction being used. However, our reduction is general enough that any other linearly convergent algorithm can be used as a subroutine. \n\nThere was no other such algorithm in the literature, but we have a proof of the linear convergence of two constrained Riemannian Gradient Descent algorithms for the strongly g-convex smooth setting:\n\n+ For $\\zeta < $ a function-independent universal constant and a metric-projection oracle: we have a very different analysis that shows that the same algorithm: metric-projected Riemannian Gradient Descent, enjoys unaccelerated linear convergence rates and therefore it can be used in our algorithm as a subroutine and we obtain our claimed final convergence.\n",
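To make the closed-form ball projection stated above concrete, the following is a minimal sketch instantiating $P_X(x) = \operatorname{Exp}_{x_0}(R \cdot \operatorname{Log}_{x_0}(x)/\|\operatorname{Log}_{x_0}(x)\|)$ on the manifold of symmetric positive definite (SPD) matrices with the affine-invariant metric, one of the Hadamard manifolds mentioned in the discussion. It is our own illustration, not code from the paper, and the helper names are invented for this sketch.

```python
import numpy as np

def _sym_apply(mat, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    eigvals, eigvecs = np.linalg.eigh(mat)
    return (eigvecs * fun(eigvals)) @ eigvecs.T

def project_to_spd_ball(x, x0, radius):
    """Closed-form metric projection of an SPD matrix x onto the geodesic ball
    of the given radius centered at x0 (affine-invariant metric):
    P(x) = Exp_{x0}(radius * Log_{x0}(x) / ||Log_{x0}(x)||) when x lies outside."""
    x0_half = _sym_apply(x0, np.sqrt)
    x0_inv_half = _sym_apply(x0, lambda w: 1.0 / np.sqrt(w))
    whitened_log = _sym_apply(x0_inv_half @ x @ x0_inv_half, np.log)  # Log_{x0}(x) in whitened coordinates
    dist = np.linalg.norm(whitened_log)  # Riemannian distance d(x0, x)
    if dist <= radius:
        return x
    return x0_half @ _sym_apply((radius / dist) * whitened_log, np.exp) @ x0_half
```

The same pattern applies on any Hadamard manifold whose exponential and inverse exponential maps are available in closed form (e.g., hyperbolic space), which is what makes the ball constraint cheap to enforce.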
" Thank you for your time and your review. Please check as well the announcement we wrote regarding an error in previous work and our new subroutines that we design in substitution. Also, we are happy to announce that we have a new result regarding a cheap metric-projected oracle that works in the case where most applications lie (no further in-manifold constraints so we can pick a ball constraint to bound geometric penalties).\n\n> It is proven in this paper that this Riemannian algorithm enjoys the same convergence rate as the Euclidean AGD algorithm **without convex constraints**.\n\nWe would like to note that we compare our algorithm with constrained Euclidean accelerated gradient descent (see the clarification of footnote 1 in page 4), which supports a generic projection oracle and calls it once per gradient oracle call, as we do.\n\n\n> However, numerically, this algorithm is hard to use. \n\nSee the metric-projection oracle we describe on the announcement to all reviewers for a very important case in which the projection oracle can be readily computed, which is a new result we have. In general, the particular constrained problem at hand requires the user to implement the projection oracle, **as it is the case with accelerated gradient descent in the Euclidean case**. But virtually all Riemannian optimization problems we know have no initial in-manifold constraints, and in that case our ball constraints are enough to bound geometric penalties and we can now implement the projection oracle easily.\n\n\n> The requirements of a geodesically convex function over a geodesically-convex set on Hadamard manifolds are quite strong and already limit the usage for many important applications. The author gives two manifolds in the paper, one is a hyperbolic manifold and the other one is the manifold of SPD manifold. Giving a few applications would be helpful.\n\nWe argue that the requirements for the function and the set are not strong. \n\nOn the one hand, Riemannian optimization is of great interest because many problems have been found to be geodesically convex, and it has recently been intensively researched because of this fact. Operator scaling, Gaussian mixture models... and many others are geodesically convex problems. See [HS20] for more examples, as we cited in line 53. We will emphasize this. On the other hand, in Euclidean optimization, the deep understanding of convex optimization has been essential in order to design efficient (or even optimal) algorithms for some tasks in non-convex optimization, such as approximating critical points or approximate local minima. It is often the case that convex subroutines are used in these algorithms. We expect a similar thing to hold for Riemannian optimization which further motivates the study of problems that are geodesically convex.\n\nThe geodesic convexity of the feasible set is not strong either. For instance, there is a natural set that is geodesically convex and uniquely geodesic: a ball $\\{x | d(x, x_0) < R \\}$ for a center $x_0$, which is enough for most applications. But beyond this fact, our analysis is general enough to support any geodesically convex constraints for which one can implement a metric projection oracle. We will make these insights more explicit in the final version of the paper. \n\n> The metric projection oracle seems not easily available. 
Giving a few examples and potentials in applications would be useful.\n\nWe have a new result regarding this, see the announcement to all reviewers, whose content we will add to the final version.\n\n\n> The algorithm needs the value of \\zeta_{2D}. How do users get this value?\n\nGiven the feasible set one can compute its diameter D. The manifolds these kinds of methods are applied to are well studied and bounds on the sectional curvature are known. Given these quantities, one can easily compute $\\zeta_{2D}$ by using its definition at the end of page 3: $2D\\sqrt{|\\kappa_{\\min}|} \\coth(2D\\sqrt{|\\kappa_{\\min}|})$.\n\n> I think the authors need to comment on the implementation and numerical performance of this proposed algorithm.\n\nWe will comment on this and on the new easy projection oracle. Reviewer qP9H also asked about the implementation and its practicality and we can add a remark with the content of that reply, with the three computational requirements (A),(B),(C) we listed to Reviewer qP9H and their implementability.\n\nWe hope to have clarified your questions and we would be happy to further answer any other questions that may arise.\n",
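For concreteness, the geometric constant quoted above can be evaluated directly once the diameter and a curvature lower bound are known. A tiny sketch (ours, with illustrative names; it simply evaluates the stated formula, with the flat-space limit set to 1):

```python
import math

def zeta_2d(diameter, kappa_min):
    """zeta_{2D} = 2D*sqrt(|kappa_min|) * coth(2D*sqrt(|kappa_min|)); tends to 1 as curvature -> 0."""
    s = 2.0 * diameter * math.sqrt(abs(kappa_min))
    return 1.0 if s == 0.0 else s / math.tanh(s)
```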
" Thank you for your review. Please see as well the post we wrote addressing all reviewers regarding an error in previous work and our new subroutines and our new metric-projection oracle.\n\n> My main concern is the metric projection oracle and the resulting comparison results with existing methods\n\nThe analysis provides bounds on the number of queries to gradient and metric projection oracles. This is exactly the same as what one gets in accelerated constrained Euclidean convex optimization. As with Euclidean optimization, some projection oracles can be more expensive and some are cheaper. That being said, now we have a way to implement very cheaply a metric-projection oracle when the constraints consist of a ball, and this case encompasses most problems, as most problems do not have in-manifold constraints and even though we need to impose some constraints in order to bound geometric penalties, we can decide to impose the constraint we want, and it can be a ball constraint.\n\n> If you factor in this cost, does the method still compare favorably to prior methods? Please clarify, if I am misunderstanding.\n\nThe method does compare favorably to prior methods regardless of the cost of the metric projection oracle. The main reason is that most current methods make an undesirable assumption which is not known how to satisfy or it is not even clear if it can be satisfied (that the iterates will stay in some pre-specified bounded set without any mechanism to enforce this), so they do not provide a full implementable algorithm for which one can give guarantees. The only two methods that allow to enforce constraints are in [Marr22] and [CB21] (see table) and they work with a ball constraint and different ball projections than what we use but equally cheap. But these two methods only apply to limited settings: the former to constant curvature and the latter to local optimization and only in strongly g-convex problems. Besides from the possibility of implementing a cheap ball projection oracle, the theory of our method allows anyone to have an accelerated method if they have more complex constraints and they implement a projection oracle (exactly as in Euclidean constrained convex optimization). Again, we emphasize that the ball constraint encompasses virtually all Riemannian problems we are aware of, as unconstrained Riemannian accelerated optimization requires constraints as well in order to bound geometric penalties and we can pick a simple one (a ball) in order to do so.\n\n> How practical is the proposed method? Can Alg. 1 (including the subproblems) be implemented efficiently for at least some classical problems, e.g., some of the examples listed in the first paragraph of the introduction?\n\nThe algorithm requires (A) computation of geometric operations (exponential map, inverse exponential map and parallel transport) (B) gradient oracle, (C) metric projection oracle / subroutine. (A) can be computed for all the applications listed and cited in the introduction, they are basic operations that previous Riemannian optimization algorithms for these problems use as well. That being said, optimization with orthogonality constraints and sparse PCA require manifolds that are not Hadamard. The rest of the applications use Hadamard manifolds. As we said in the conclusion of the paper, extending the analysis to manifolds with positive curvature is an interesting future direction of research. 
(B) can be readily computed using automatic differentiation and access to the parametrization of the manifold. As for (C) as explained above for the almost universal case in which there are no in-manifold constraints we can now implement this cheaply. In the general case, efficiency of this operation depends on the constraints (as in the Euclidean case and as in analysis like projected gradient descent or constrained accelerated gradient descent).\n\n\nWe hope to have clarified all of your questions. In sum, our intention with this work was to achieve acceleration in the Riemannian setting with an algorithm that is implementable and comes with guarantees, by removing the undesirable assumption most works had (it is not realistic the iterates stay in a pre-specified set if there are no mechanisms to enforce this, so the guarantees of these algorithms are not usable) and by obtaining much more general results than the two works that did not made the assumption. We note our analysis is general enough that it can work with a general metric projection for their problems, as in constrained Euclidean convex optimization. We hope that this level of generality allows for new research whether it is on the implementation of some other projection operators or some other linearly convergent algorithms that can be used as subroutines. Additionally, our new result allows us to implement the projection oracle in a very broad case, making the algorithm efficient and practical. \n",
" Thank you for your review, your time and your many questions! Please check as well the post directed to all reviewers that we wrote. We reply to your comments here below and hope that you find them useful and they clarify the main points of our work.\n\n> The writing and the flow are excellent but only provide a very high level idea of the work and contributions of the current paper. (...) As an example a lot of the background reviewed on Lines 85-152 remains unused within the main paper.\n\nAccording to our experience, reviewers and readers usually ask us about a preliminaries section in order to understand the concepts that follow with algorithms of this kind, specially when they are in a setting that is not very standard and well known, as it is the case with Riemannian optimization. That said, we believe all concepts in lines 85-152 are necessary to read the 9 pages long main part of the paper: to understand the insights explained in high-level in section 1 and then compared with previous work, to understand the pseudocode, and to understand the technical discussion in section 2. All the other technical details are provided in the appendix.\n\n> It would be helpful to provide a remark, specifically gathering all of the \"universal constants\" and \"logarithmic factors\" (e.g., Remark A.5, Line 838, Remark 2.3, etc); and/or the source of their introduction into the analysis.\n\nWe can provide the source of the introduction of our constants and logarithmic factors for clarifying purposes in the final version of the paper.\n\n> Footnote 0: some of the links seem to be broken; for example, for \\kappa_\\min.\n\nThanks for pointing this out, we have fixed that link and some others. We have just realized we can output warnings for the links whose target we forgot to specify and have now fixed them all.\n\n> There is an impossibility argument on Lines 246-250 that I did not follow. Could you please elaborate?\n\nIt was not exactly an impossibility argument. In some problems (like for strongly g-convex problems) one can argue that reducing the function value implies the iterates get closer to the minimizer and so a monotone method has bounded iterates. Or it could be that in some setting one can prove that an algorithm with some particular learning rates will have bounded iterates. One could think in such a case that the constraints that we use to bound geometric penalties are not needed because the algorithms naturally stay bounded. But we point out this is not enough to guarantee bounded geometric penalties because given a set of diameter D where we want the iterates to remain, algorithms use the value D to set learning rates, and provide convergence rates. The existence of boundedness of the iterates could only guarantee boundedness with diameter D' > D deeming the analysis unusable, unless we change the learning rates to depend on D', in which case the boundedness of the iterates could have even greater diameter D'' and so on.\n\n\n> Some examples/references for the note on Lines 365-367 could be useful. (\"We note that any other algorithm with linear convergence rates for constrained strongly g-convex, smooth problems that works with a metric-projection oracle can be used as a subroutine to obtain an accelerated Riemannian algorithm.\")\n\nHere we were pointing out that our results consist of a general reduction and future algorithms with linear rates can be used as a subroutine for obtaining an accelerated algorithm. 
This is very interesting because if, for instance, someone were to show **unaccelerated** linear rates for constrained strongly g-convex finite-sum problems, then our results provide an **accelerated** algorithm for g-convex (strongly or not) for the finite-sum case. Such algorithms are not known. And similarly for other settings. We can change the sentence to make clear we are not aware of any other linearly convergence algorithms besides from the two subroutines we provide (see the announcement we made to all reviewers) and we can add the example made in this paragraph.\n\n> Could the authors elaborate on \"due to fundamental properties of their methods\" on Line 63;\n\nWe were referring to the following: their algorithms and analysis need to advance against the direction of the gradient for some length, which is incompatible with constraints or projections, and it is a fundamental part of their methods. We will change this sentence to be more precise.\n\n> elaborate on \"which is not amenable to optimization\" on Line 73-74;\n\nWe will make the sentence more clear, we were just emphasizing the hardness of the problem being non-convex.\n",
" > In relation to \"The Riemannian proximal point subroutine we design is of independent interest\", on Lines 79-80 (and in the abstract), it might be beneficial to present this subroutine in an algorithmic form instead of plain text.\n\nThe main points are the warm start which is just one step, invoking an unaccelerated linearly converging algorithm and the analysis behind it. We can add an {algorithm} environment summarizing this.\n\n> Are there any limitations of the presented algorithm or analysis that the authors feel could inform (A) a better comparison with the existing literature as well as (B) future work? Including possibilities of alternatives in certain aspects. Other examples: any further details regarding the multiplicative factors (Line 382), or any indication or difficulty in extending to other manifolds (Lines 383-384)?\n\n(A) Besides from the now new thing implied by our discovery of the analysis of (Zhang and Sra COLT 2016) being wrong and thus us needing to now use other subroutines, that we have designed (see announcement to all reviewers), which we will add to the final version and comment on, we think we have pointed out all limitations and comparisons extensively. (B) Regarding future work, there is the possibility that the $\\log(1/epsilon)$ could be removed by exploring how to adapt recent techniques on Euclidean convex optimization that can use accelerated proximal point methods with a constant number of iterations in the subroutine, like \"Contracting proximal methods for smooth convex optimization\", but it is not clear how to achieve something similar in the Riemannian case. We currently do not know how one could reduce the geometric factors, but the fact that [KY22] had $\\zeta$ instead of $\\zeta^2$, even if they work under more stringent assumptions, could mean that another algorithm could improve on that aspect as well.\n\n\n",
" The authors present \"an accelerated first-order method for the optimization of smooth and (strongly or not) geodesically-convex functions over a compact and geodesically convex set in Hadamard manifolds, that we access to via a metric-projection oracle\". For this oracle, they use \"projected Riemannian gradient descent to implement an inexact proximal point operator\". \nThe analysis aims to establish 'an accelerated rate' up to multiplicative factors induced by the manifold's sectional curvature (Line 166, Fact 1.3, Lines 149-155), as well as some log factors (Remark A.5, Line 838, Remark 2.3). An extensive list of recent work on Riemannian optimization has been provided. \n The writing and the flow are excellent but only provide a very high level idea of the work and contributions of the current paper. A considerable part of the paper, up until the middle of page 7, out of 9 pages, focuses on review of background material, review of previous works, and \"insights\", and the technical statements and proofs have been presented in the appendices. I would also leave a proper assessment of the claims to reviewers with close familiarity to those recent works cited upon which the paper claims advantages. As an example a lot of the background reviewed on Lines 85-152 remains unused within the main paper. \n It would be helpful to provide a remark, specifically gathering all of the \"universal constants\" and \"logarithmic factors\" (e.g., Remark A.5, Line 838, Remark 2.3, etc); and/or the source of their introduction into the analysis. \n\nFootnote 0: some of the links seem to be broken; for example, for \\kappa_\\min. \n\nThere is an impossibility argument on Lines 246-250 that I did not follow. Could you please elaborate?\n\nA short discussion on the gains from the 'approximate' computation (Remark 2.3), in some of the Riemannian optimization instances, could be helpful.\n\nSome examples/references for the note on Lines 365-367 could be useful. (\"We note that any other algorithm with linear convergence rates for constrained strongly g-convex, smooth problems that works with a metric-projection oracle can be used as a subroutine to obtain an accelerated Riemannian algorithm.\")\n\nCould the authors elaborate on \n- \"due to fundamental properties of their methods\" on Line 63; \n- \"which is not amenable to optimization\" on Line 73-74;\n\nIn relation to \"The Riemannian proximal point subroutine we design is of independent interest\", on Lines 79-80 (and in the abstract), it might be beneficial to present this subroutine in an algorithmic form instead of plain text. \n\n\nAre there any limitations of the presented algorithm or analysis that the authors feel could inform a better comparison with the existing literature as well as future work? Including possibilities of alternatives in certain aspects. Other examples: any further details regarding the multiplicative factors (Line 382), or any indication or difficulty in extending to other manifolds (Lines 383-384)?\n Please see previous comments on the discussion of technical contributions as well as on logarithmic factors. \n",
" The paper proposes accelerated first-order methods for optimizing smooth and geodesically convex functions on Hadamard manifolds. Acceleration on Riemannian manifolds is an important and timely topic that has recently received a lot of interest in the optimization community. The literature review is comprehensive and the background section is well-written. The theoretical results are well-motivated and presented.\n\nMy main concern is the metric projection oracle and the resulting comparison results with existing methods. If I understand correctly, the complexity results are given with respect to the number of queries to the metric projection oracle. However, the cost of one such query is not clear to me from the text (also not from Rmk. 2.3). If you factor in this cost, does the method still compare favorably to prior methods? Please clarify, if I am misunderstanding. \n\nHow practical is the proposed method? Can Alg. 1 (including the subproblems) be implemented efficiently for at least some classical problems, e.g., some of the examples listed in the first paragraph of the introduction? see above see above",
" This paper proposed an accelerated first-order optimization algorithm for smooth and\n(strongly) geodesically-convex functions over a compact and geodesically-convex set\non Hadamard manifolds. It is proven in this paper that this Riemannian algorithm\nenjoys the same convergence rate as the Euclidean AGD algorithm without convex constraints. \nThough there exist some relavent works, none of them fully generalize the Euclidean AGD \nconvergence results to the Riemannian setting. The contribution of this paper is on the theoretical aspect. It is interesting in the sense \nthat it firstly gives an algorithm that fully extends the Euclidean AGD to the Riemannian setting\nwith constraints. However, numerically, this algorithm is hard to use. It involves quite a \nfew unknown operators and parameters, and expensive operators. The requirements of a geodesically convex function over a geodesically-convex set on Hadamard manifolds\nare quite strong and already limit the usage for many important applications. The author gives two manifolds in\nthe paper, one is a hyperbolic manifold and the other one is the manifold of SPD manifold. Giving a few applications\nwould be helpful.\n\nThe metric projection oracle seems not easily available. Giving a few examples and potentials in applications\nwould be useful.\n\nThe algorithm needs the value of \\zeta_{2D}. How do users get this value? I think the authors need to comment on the implementation and numerical performance of this proposed algorithm."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"LsL9guTP5E2",
"WfW0aTSeJQM",
"6lmR8RIwmq",
"SLmydIBk_wa",
"DNcVRKHtra7",
"wqlFQYdwCmV",
"PWJn03a-2nS",
"H4NNasFDUmk",
"nips_2022_UPZCt9perOn",
"DNcVRKHtra7",
"APk1U6e1Nwb",
"wqlFQYdwCmV",
"wqlFQYdwCmV",
"nips_2022_UPZCt9perOn",
"nips_2022_UPZCt9perOn",
"nips_2022_UPZCt9perOn"
] |
nips_2022_bi1BTcXa8Q | Cross-modal Learning for Image-Guided Point Cloud Shape Completion | In this paper we explore the recent topic of point cloud completion, guided by an auxiliary image. We show how it is possible to effectively combine the information from the two modalities in a localized latent space, thus avoiding the need for complex point cloud reconstruction methods from single views used by the state-of-the-art. We also investigate a novel self-supervised setting where the auxiliary image provides a supervisory signal to the training process by using a differentiable renderer on the completed point cloud to measure fidelity in the image space. Experiments show significant improvements over state-of-the-art supervised methods for both unimodal and multimodal completion. We also show the effectiveness of the self-supervised approach which outperforms a number of supervised methods and is competitive with the latest supervised models only exploiting point cloud information. | Accept | The paper proposes a point cloud completion method that can take an auxiliary image as guidance. All the reviewers rate the paper slightly above the bar. They like the reported strong performance over the prior baseline and also the capability of using the auxiliary input. Although several reviewers raise concerns about missing experiments on real datasets such as ScanNet or KITTI, they still think the paper has sufficient merit. The AC finds no strong reason to disagree with the reviewers. | test | [
"tGYNy0IK5Pe",
"zJuaIU7F0CQ",
"cC-oO9sLMu",
"-vkIb8PYi7m",
"Z4ZDeBaRRN",
"ATc87KYJQz4",
"NkP8w3Jt_Ij",
"InZkK0-wInM",
"YyfHHX8C2sy",
"rIZHcz7ztY3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors addressed most of my concerns. I will change the final score to borderline accept.",
" The rebuttal addressed my concerns. Therefore, I change my ratings to borderline accept.",
" We thank the reviewer for the constructive feedback and for all the suggestions to improve the paper. \n\n> *The main limitation of the work is its benchmarking, which is done on ShapeNet shapes. Though mentioned by the authors are the limitation, it still makes the paper rather weak because we don't know how well it works in complicated scenes (ScanNet).*\n\nFor what concerns limitations, it would be interesting to extend our study to a real-world context, which is undoubtedly more sophisticated and articulated, but it would be a non-trivial extension due to the presence of various additional aspects discussed below. Thus, having a preliminary step in which the problem is analyzed in a controlled setting is a good and routine practice. Indeed, the vast majority of papers related to point cloud completion focus on synthetic datasets, and the state of the art for multimodal completion (ViPC) did not consider real datasets for benchmarking. Furthermore, real scenes often introduce unique issues that may necessitate ad hoc changes to the architecture and proper design decisions; for these reasons, applying the proposed framework to scenes has been left for future work. Moreover, real datasets such as ScanNet or KITTI shapes do not have ground truth complete point clouds and the evaluation is qualitative, sometimes performed with a subjective study as in Pan et al. “Variational Relational Point Completion Network”. Regarding the potential negative societal impacts of our study, we do not see a direct connection towards malicious usage of the shape completion method presented in the paper.\n\n> *The loss function is not fully ablated for example density aware Chamfer Distance.*\n\nWe omitted this ablation due to space constraints, but we will include it in the final paper. We added it in the revised supplementary material (Section 5) where we also report a qualitative visualization to show the effectiveness of DCD at improving visual quality with more uniform point distributions.\n\n> *Error bars are not shown in the experiments. I understand that the benchmark shows the best performing result, still, error bars can be shown in a separate table or in the Supplementary material.*\n\nWe provide error bars for each category in the revised supplementary material (Section 7).\n\n> *Why do experiments in Table 1 only show 8 categories whereas the original dataset consists of 13 categories?*\n\nIn order to ensure a fair comparison with published results, we followed the setting of ViPC that only uses 8 categories.\n\n> *The mixup loss only provides a tiny improvement, is it worth including it in the training? How much computational overhead this does loss contribute to the training?*\n\nThe mixup loss provides only a small improvement from the perspective of the Chamfer Distance. However it substantially improves the completed shape from a qualitative point of view, helping the network generate more complete shapes. Moreover, the computational overhead due to creating the mixed input shape is very small, so we decided to keep the method as it offered a very favorable cost-performance tradeoff.\n\n> *Explain M, N, and F in Figure 1. The figure and its captions should be self-contained. Try to have consistent notations in Eq 3,4 and the Figure 1.*\n\nN is the number of points of the input point cloud; M is the number of points generated by each branch of the decoder, while F represents the feature dimension. 
We will include these details in the caption in the revised paper.\n\n> *What is \\beta in the Eq. 8?*\n\n\\beta is the weight of the weighted Chamfer Distance and it is applied to penalize the distance from each point in \\hat{Y} to its nearest neighbor in Y since \\hat{Y} consists in a complete point cloud while Y is the partial point cloud used as a pseudo ground truth. This will be clarified in the final paper.\n\n> *How are hyper parameters selected? Is there is a validation set?*\n\nWe used a subset of the training set as a validation set, and we performed a small amount of cross validation for selecting the hyperparameters. However, for the final models we included the validation set in the training dataset.\n\n> *To make a more convincing argument, the Figure 3 can be modified to show an average over several shapes.*\n\nWe appreciate the suggestion and we will modify Figure 3 in the final version showing an average over several shapes of the same category.\n\nFinally, we included in the supplementary material some failure cases where the model struggles to reconstruct perfectly.\n",
" We thank the reviewer for the constructive feedback and for all the suggestions to improve the paper. We address the different highlighted weaknesses separately: \n\n> *The running time and model size comparison should be included.*\n\nWe agree that model size comparison and running time should be reported and they will be included in the final paper. For the time being, we have included it in the supplementary material (Section 6). Our model has a lower number of parameters with respect to ViPC and VRC and is also faster in inference due to the efficient decoder structure. \n\n> *Most of the competitive methods only take the point cloud as input. Adding the RGB information will definitely improve the completion results. The proposed method outperforms ViPC significantly, but ViPC is not well compared to the other methods. PSGN (Fan et al., CVPR'17) should also be regarded as a simple baseline.*\n\nSingle view reconstruction methods like PSGN only take an image as input, thus being at a disadvantage with respect to the multimodal completion methods like ViPC and XMFNet that also exploit a partial point cloud. In their original paper (Zhang et al. , “View Guided Point Cloud Completion '', CVPR’21), the authors compare it against the suggested PSGN method, concluding that ViPC achieves superior performance. Indeed, the overall Chamfer Distance of PSGN on ShapeNet-ViPC is 7.092, the one of ViPC is 3.308 and ours is 1.443. For this reason we did not include comparisons against single-view reconstruction baselines.\n\n> *Maybe it is harsh, but the manuscript does not bring new technical contributions.*\n\nWe think that the combination of implicit fusion at the feature level and novel design elements such as the attention-based upsampling module provide a substantial improvement with respect to the existing literature based on the single-view reconstruction and explicit fusion paradigm. Moreover, we are the first to propose a self-supervised setting for multimodal completion that could be useful when complete 3d scans are unavailable; this contribution has been particularly appreciated by other reviewers. \n\nFor what concerns limitations, it would be interesting to extend our study to a real-world context, which is undoubtedly more sophisticated and articulated, but it would be a non-trivial extension due to the presence of various additional aspects discussed below. Thus, having a preliminary step in which the problem is analyzed in a controlled setting is a good and routine practice. Indeed, the vast majority of papers related to point cloud completion focus on synthetic datasets, and the state of the art for multimodal completion (ViPC) did not consider real datasets for benchmarking. Furthermore, real scenes often introduce unique issues that may necessitate ad hoc changes to the architecture and proper design decisions; for these reasons, applying the proposed framework to scenes has been left for future work. Moreover, real datasets such as ScanNet or KITTI shapes do not have ground truth complete point clouds and the evaluation is qualitative, sometimes performed with a subjective study as in Pan et al. “Variational Relational Point Completion Network”. Regarding the potential negative societal impacts of our study, we do not see a direct connection towards malicious usage of the shape completion method presented in the paper.\n",
" We thank the reviewer for the constructive feedback and for all the suggestions to improve the paper. \nWe address the different highlighted weaknesses separately: \n\n> *1. In the introduction, their novel method is proposed to address one limitation in ViPC. However, the strategy used in ViPC is explicit, while the proposed method is implicit. Actually, once the fusion strategy happens in the feature domain, it is difficult to say what happens during the fusion process. If the authors want to say that their proposed method significantly outperforms ViPC, they should better clarify the limitations of ViPC.*\n\nViPC is bottlenecked by the need to reconstruct a coarse point cloud from the image using single view reconstruction techniques, thus performing the fusion explicitly; this is a generally hard inverse problem in itself, and full of pitfalls. Our findings show that their approach is sub-optimal and implicit fusion achieves better performance. The claim “the proposed method significantly outperforms ViPC” is supported by the strong experimental results obtained in the experiments.\n\n>*2. In the checklist, the authors say that they have disccused the limitations in their manuscript, while it is actually just a hand-wavy sentence in the conclusion. I do not think this is a serious discussion for the limitation of this work.*\n\nWe agree with the reviewer that we have not discussed limitations at length. Indeed, our major limitation is the lack of study of a real-world scenario for the proposed framework. It would be interesting to extend our study to a real-world context, which is undoubtedly more sophisticated and articulated, but it would be a non-trivial extension due to the presence of various additional aspects discussed below. Thus, having a preliminary step in which the problem is analyzed in a controlled setting is a good and routine practice. Indeed, the vast majority of papers related to point cloud completion focus on synthetic datasets, and the state of the art for multimodal completion (ViPC) did not consider real datasets for benchmarking. Furthermore, scenes introduce unique issues that may necessitate ad hoc changes to the architecture and proper design decisions; for these reasons, applying the proposed framework to scenes has been left for future works. Moreover, real datasets such as ScanNet or KITTI shapes do not have ground truth complete point clouds and the evaluation is qualitative, sometimes performed with a user study as in Pan et al. “Variational Relational Point Completion Network”. We will include an extensive discussion of the limitations in the final version of the paper.\n\n> *3. What is the main advantage of the proposed self-supervised loss. It seems that it is a very important component in this framework. However, I cannot find the main contribution of this module, considering the existence of the reconstruction loss. Besides, in the ablation study, the authors also say that this module just allows the training process to have a faster and smoother convergence. This description makes the module less important. 
Is is possible to provide more results like Figure 5?*\n\nWe would like to clarify that our work considers two separate settings: i) the supervised setting in which one has access to complete points and just employs the supervised reconstruction loss; ii) the self-supervised one (or weakly-supervised following the comment of reviewer dnf9) for which complete shapes are not available and where we propose our self-supervised loss combining partial reconstruction, mixup and rendering for pseudo-supervision via the auxiliary image. The two approaches are not mixed together. Indeed, when a complete ground truth point cloud is available, as in the supervised setting, the reconstruction loss using the complete shape is extremely powerful, and the contribution of a mixed strategy with the rendering fidelity term does not increase performance. However, in circumstances where 3D ground truths are not available, as in the self-supervised setting that we propose, the weaker supervision provided by the rendering loss becomes extremely valuable, allowing our method to approach the performance of supervised methods despite our lack of complete shapes.\n\n> *4. In the third column of Figure 4, the results obtained by XMFNet cannot outperforms PCN and VRCnet.*\n\nIt is true that in Table 3 (we assume the reviewer intended to reference Table 3 instead of Fig.4) the self-supervised XMFNet cannot outperform the supervised counterpart of PCN and VRCNet. However, those are baselines trained in a supervised manner with access to complete shapes, while we are testing our proposed method in the self-supervised approach. It is thus remarkable that despite the lack of ground truth, the proposed method is able to approach the results of those supervised baselines and sometimes outperform them.",
" We thank the reviewer for the constructive feedback and for all the suggestions to improve the paper. \nWe address the different highlighted weaknesses separately: \n\n>*(1) This paper is heavily based on the previous work ViPC. The core differences in the fully-supervised setting are only cross-modal fusion (attention), which is widely used for other multi-modalities tasks.*\n\nIt is true that the cross-modal attention has been used in the literature for other multi-modality tasks, however we are the first to propose its usage in this specific setting, where the irregular domain of a 3D point cloud and an image constitute our modalities. We believe that the shape completion task could be advanced by taking advantage of the cross-attention. Moreover, as we underline in the next point, besides the fusion module we introduce some other novel elements in the design of the completion model.\n\n>*(2) The Unimodal result in Table 3 is already higher than that of previous art (multi-modal), however, there is no clear illustration of why baseline performs pretty well. In addition, the performance improvement through introducing an auxiliary view-image seems marginal.* \n\nThe fact that our unimodal baseline already outperforms the previous multimodal state of the art underlines the limitations of the previous approach (ViPC). Our unimodal architecture's effectiveness is primarily due to our powerful attention-based upsampling module, combined with our encoder's local feature extraction abilities and the stacked self-attention layers for point cloud features, which model long-range relationships between local features at various levels.\nWhile the improvement provided by the auxiliary image may seem small in the “average” Chamfer Distance results, we want to underline that this average is performed over a large number of views, many of which presenting similar occlusions to the partial point point, thus failing to provide substantial extra information. When a good view is available, the improvement can be substantial (refer to Fig.3 for an example).\n\n> *(3) I think the self-supervised method in this paper seems overclaimed. Since auxiliary view-image is introduced in the supervision which contains partial information for the point cloud ground truth, it seems more like a weakly-supervised setting.*\n\nWe termed the setting as self-supervised since we do not possess the ground truth complete 3d shape as supervision. In the unimodal point cloud literature, this situation is commonly referred to as self-supervised. However, in the suggested multimodal context, we agree with the reviewer that a form of ground truth is actually available, albeit as a degraded 2D projection. As a result, we recognize that it is preferable to refer to the proposed solution as weakly-supervised, and we will change this in the final version of the paper.\n\n> *4) There is no ablation for the DCD loss.*\n\nWe omitted this ablation due to space constraints, but we will include it in the final paper. We added it in the supplementary material (Section 5) where we also report a qualitative visualization to show the effectiveness of DCD at improving visual quality with more uniform shapes. \n\n>*Limitations*\n\nThe main limitation of our work is the lack of study of a real-world scenario for the proposed framework. 
Indeed, it would be interesting to extend our study to a real-world context, which is undoubtedly more sophisticated and articulated, but it would be a non-trivial extension due to the presence of various additional aspects discussed below. Thus, having a preliminary step in which the problem is analyzed in a controlled setting is a good and routine practice. Indeed, the vast majority of papers related to point cloud completion focus on synthetic datasets, and the state of the art for multimodal completion (ViPC) did not consider real datasets for benchmarking. Furthermore, real scenes often introduce unique issues that may necessitate ad hoc changes to the architecture and proper design decisions; for these reasons, applying the proposed framework to scenes has been left for future work. Moreover, real datasets such as ScanNet or KITTI shapes do not have ground truth complete point clouds and the evaluation is qualitative, sometimes performed with a subjective study as in Pan et al. “Variational Relational Point Completion Network”. Regarding the potential negative societal impacts of our study, we do not see a direct connection towards malicious usage of the shape completion method presented in the paper.\n",
" This paper tackles the problem of point cloud completion through using an incomplete point cloud and an auxiliary view-image as input. Moreover, this paper proposes using view-image as supervised signal and conducting self-supervised point cloud completion. Strengths: \n(1) This paper achieves state-of-the-art results on point cloud completion. (2) Furthermore, it is the first proposing self-supervised setting for point cloud completion task, and I think it is the core contribution of this paper. Also, by only using the self-supervised setting, this paper achieves a comparable result with previous fully-supervised manners. (3) The writing for this paper is relatively straightforward and well-organized.\n\nWeakness:\n (1) This paper is heavily based on the previous work ViPC. The core differences in the fully-supervised setting are only cross-modal fusion (attention), which is widely used for other multi-modalities tasks. \n(2) The Unimodal result in Table 3 is already higher than that of previous art (multi-modal), however, there is no clear illustration of why baseline performs pretty well. In addition, the performance improvement through introducing an auxiliary view-image seems marginal. \n(3) I think the self-supervised method in this paper seems overclaimed. Since auxiliary view-image is introduced in the supervision which contains partial information for the point cloud ground truth, it seems more like a weakly-supervised setting.\n (4) There is no ablation for the DCD loss. See the weakness. The authors have not addressed the limitations and potential negative societal impact of this work.",
" This method proposes a corss-modal learning for image-guided point cloud shape completion. In this work, they employ a mixed strategy to train the network in a supervised and self-supervised manner. The image-guided framework seems to be an interesting setting for this problem. On the other hand, the use of self-supervised stratgey also increases the generalization and robustness of this framework. I have several conerns for this work:\n1. In the introduction, their novel method is proposed to address one limitation in ViPC. However, the strategy used in ViPC is explicit, while the proposed method is implicit. Actually, once the fusion strategy happens in the feature domain, it is difficult to say what happens during the fusion process. If the authors want to say that their proposed method significantly outperforms ViPC, they should better clarify the limitations of ViPC.\n2. In the checklist, the authors say that they have disccused the limitations in their manuscript, while it is actually just a hand-wavy sentence in the conclusion. I do not think this is a serious discussion for the limitation of this work. \n3. What is the main advantage of the proposed self-supervised loss. It seems that it is a very important component in this framework. However, I cannot find the main contribution of this module, considering the existence of the reconstruction loss. Besides, in the ablation study, the authors also say that this module just allows the training process to have a faster and smoother convergence. This description makes the module less important. Is is possible to provide more results like Figure 5? \n4. In the third column of Figure 4, the results obtained by XMFNet cannot outperforms PCN and VRCnet. I have shown all my concerns in above sections. I have read the rebuttal and keep my original rating. Thanks.",
" The authors propose a new method for point cloud completion from the depth and RGB images. With the proposed cross attention and self-attention, the RGB and depth information can be better leveraged. Experimental results indicate that the proposed method outperforms the state-of-the-art methods. The self-supervised approach also outperforms a number of supervised methods. **Strengths**\n\n- The paper is clearly written.\n- The experimental results are solid in both supervised learning and self-supervised learning.\n\n**Weaknesses**\n\n- The running time and model size comparison should be included.\n- Most of the competitive methods only take the point cloud as input. Adding the RGB information will definitely improve the completion results. The proposed method outperforms ViPC significantly, but ViPC is not well compared to the other methods. PSGN (Fan et al., CVPR'17) should also be regarded as a simple baseline.\n- Maybe it is harsh, but the manuscript does not bring new technical contributions. Please refer to the weaknesses. In the Introduction, the authors claim that the application of the proposed method is to complete the missing shape of an object due to the wide use of RGBD cameras. However, the manuscript does not provide any experiments on real-world scenarios.",
" The paper proposes a point cloud completion approach with images as auxiliary information. The main challenge while utilizing the information from image for point cloud competition is effectively fusing local context between image and partial point cloud. This paper proposes using cross attention modules to fuse the local information from the image and point cloud. Furthermore, to supervised the neural network, two losses are used: 1) supervised loss based on chamfer distance between predicted and ground truth point cloud and 2) self-supervised silhouette reconstruction loss with the help of a differentiable rendering of predicted point cloud.\nExperiments are done on ShapeNet-ViPC and the proposed approach outperforms the baselines.\n **Strengths:**\n1. The problem statement is clear, contributions are clearly highlighted and the method section is well written.\n2. Fusion of local information between two modalities with the help of cross attention is a good idea considering that the point cloud is a partial representation of the original shape and direct one-to-one mapping between two modalities doesn't exist.\n3. Interestingly the differentiable rendering of the silhouette gives rather a large improvement even though the silhouette is a rather weak signal.\n\n**Weakness**\n1. The main limitation of the work is its benchmarking, which is done on ShapeNet shapes. Though mentioned by the authors are the limitation, it still makes the paper rather weak because we don't know how well it works in complicated scenes (ScanNet).\n2. The loss function is not fully ablated for example density aware Chamfer Distance.\n3. Error bars are not shown in the experiments. I understand that the benchmark shows the best performing result, still, error bars can be shown in a separate table or in the Supplementary material.\n4. Why do experiments in Table 1 only show 8 categories whereas the original dataset consists of 13 categories?\n5. The mixup loss only provides a tiny improvement, is it worth including it in the training? How much computational overhead this does loss contribute to the training?\n\n 1. Explain M, N, and F in Figure 1. The figure and its captions should be self-contained. Try to have consistent notations in Eq 3,4 and the Figure 1.\n2. What is \\beta in the Eq. 8?\n3. How are hyper parameters selected? Is there is a validation set?\n4. To make a more convincing argument, the Figure 3 can be modified to show an average over several shapes.\n\n I think more qualitative results indicating failure cases should be provided."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5,
4
] | [
"ATc87KYJQz4",
"-vkIb8PYi7m",
"rIZHcz7ztY3",
"YyfHHX8C2sy",
"InZkK0-wInM",
"NkP8w3Jt_Ij",
"nips_2022_bi1BTcXa8Q",
"nips_2022_bi1BTcXa8Q",
"nips_2022_bi1BTcXa8Q",
"nips_2022_bi1BTcXa8Q"
] |
nips_2022_dp0zWsdOV1h | Retrieve, Reason, and Refine: Generating Accurate and Faithful Patient Instructions | The "Patient Instruction" (PI), which contains critical instructional information provided both to carers and to the patient at the time of discharge, is essential for the patient to manage their condition outside hospital. An accurate and easy-to-follow PI can improve the self-management of patients which can in turn reduce hospital readmission rates. However, writing an appropriate PI can be extremely time consuming for physicians, and is subject to being incomplete or error-prone for (potentially overworked) physicians. Therefore, we propose a new task that can provide an objective means of avoiding incompleteness, while reducing clinical workload: the automatic generation of the PI, which is imagined as being a document that the clinician can review, modify, and approve as necessary (rather than taking the human "out of the loop"). We build a benchmark clinical dataset and propose the Re$^3$Writer, which imitates the working patterns of physicians to first retrieve related working experience from historical PIs written by physicians, then reason related medical knowledge. Finally, it refines the retrieved working experience and reasoned medical knowledge to extract useful information, which is used to generate the PI for previously-unseen patient according to their health records during hospitalization. Our experiments show that, using our method, the performance of 6 different models can be substantially boosted across all metrics, with up to 20%, 11%, and 19% relative improvements in BLEU-4, ROUGE-L, and METEOR, respectively. Meanwhile, we show results from human evaluations to measure the effectiveness in terms of its usefulness for clinical practice. | Accept | The authors propose and evaluate a method to automatically generate "patient instruction" drafts. There was a consensus amongst reviewers that this is an interesting application.
While the technical innovation here may be modest, the empirical results firmly establish the benefits of the proposed "Re3Writer" approach. The ablations provided (both in the original submission and during author response) further strengthen the contribution.
Furthermore, the task definition and accompanying dataset (derived from MIMIC) constitute contributions which may lead to follow-up work. That said, in revised versions of the paper the authors should include additional details about the data and be explicit that they will release this (as mentioned by reviewer oy13). | train | [
"A_w7N1M0F4j",
"XHt8lGjFzBh",
"YhwUOr-oXT",
"OAnf_CHn_A",
"wQJFyglIT9m",
"ieD96g-LOlF",
"k40K1LyJS8",
"so62kMf_v2f",
"0DTHNxlQzlr",
"VEChKQ5masm",
"h9brTxqaQDY",
"cMxZ9vWjEqo"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for acknowledging the response. We are genuinely happy that our response properly addresses fellow reviewers' concerns. We thank the reviewer again for the constructive feedback which have helped us improve our paper!\n",
" We thank the reviewer for acknowledging the response. We are genuinely happy that our response properly addresses fellow reviewers' concerns. We thank the reviewer again for the constructive feedback which have helped us improve our paper!",
" Dear Reviewer Wpt8:\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince the author-reviewer discussion period will end soon in 1 day, we appreciate it if you take the time to read our response and give us some feedback. Please don't hesitate to let us know if there are any additional clarifications or experiments that we can offer, we appreciate your suggestions. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work.\n\nMany thanks for your time and efforts!\n\nWarmest regards,\n\nAuthors",
" Thank you for considering my suggestions. I don't have any further concerns/questions. ",
" Thank you for considering my suggestions! I appreciate the quick turnaround on results and it is good to see that considering demographic information does help in improving the model performance further. I don't have any further concerns/questions. Thank you.",
" Thanks for your helpful comments! If you have further concerns, please feel free to contact us.\n\n> **Q1**: Additional ablation studies about what type of notes (nursing notes, physician notes, radiology reports) contributes more in the ReWriter performance.\n\n**A1**: Thank you for pointing out a potential analysis point! We follow your advice to further conduct ablation studies to understand what type of notes (i.e., nursing notes, physician notes, radiology reports) contribute more to our approach. In detail, we remove the nursing notes/physician notes/radiology reports from the input to evaluate the performance of our approach. The results are as follows:\n\n|Methods|METEOR|ROUGE-1|ROUGE-2|ROUGE-L|BLEU-1|BLEU-2|BLEU-3|BLEU-4|\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|Ours (Seq2Seq)|**20.9**|**40.8**|**21.9**|**38.6**|**43.2**|**34.2**|**29.7**|**26.8**|\n|  w/o Physician Notes|19.7|39.5|20.7|37.4|40.2|31.7|27.3|24.5|\n|  w/o Nursing Notes|20.3|39.9|21.6|37.9|41.3|32.9|28.6|26.0|\n|  w/o Radiology Reports|20.4|40.5|21.8|38.4|41.5|33.0|28.7|26.0|\n||\n|Ours (Transformer)|**23.7**|**45.8**|**24.4**|**42.2**|**52.4**|**41.2**|**35.0**|**30.5**|\n|  w/o Physician Notes|21.8|42.1|22.8|39.9|49.6|39.3|33.4|28.8|\n|  w/o Nursing Notes|22.9|44.3|23.5|40.7|51.3|40.1|34.2|29.7|\n|  w/o Radiology Reports|23.4|45.6|24.3|42.1|52.0|40.5|34.6|30.1|\n\nAs we can see, removing the Physician Notes significantly decreases the performance of our approach, indicating that the Physician Notes contribute more to the performance of patient instruction (PI) generation. We will include more analyses in our revision.\n\n> **Q2**: Will authors publish the PI dataset?\n\n**A2**: Yes! We promise to release the PI dataset and the codes, including the pre-processing codes, training codes and testing codes, which have been attached to the supplementary material, upon publication. Meanwhile, we promise to provide detailed instructions to guide the readers to 1) build and pre-process our PI dataset; 2) train the baseline model and our proposed model; and 3) generate the PIs for previously-unseen patients using trained models.\n\n> **Q3**: Could authors provide more details about the dataset generation?\n\n**A3**: Yes, we promise to provide detailed instructions to help readers to construct and pre-process the PI dataset in our released codes. \nMeanwhile, here are the details to construct our PI dataset:\n\n1) For each patient in the MIMIC-III v1.4 resource, the dataset includes various patient’s health records during hospitalization, we directly concatenated all available health records during hospitalization as input for a patient, e.g., admission notes, nursing notes, and radiology reports. For example, if a patient only has admission notes, our model will just rely on the available admission notes to generate the PI (L221-L224).\n\n2) In the MIMIC-III v1.4 resource, the patient instructions (i.e., discharge instructions) are included in discharge summaries. Therefore, given raw entries of MIMIC-III, we first used the discharge summaries to extract the patient instructions and excluded entries without patient instructions and those with less than five-word counts in patient instructions. 
The remaining 35,851 entries involve 28,029 unique patients, 35,851 hospital admissions, and 35,851 pairs of health records and patient instructions (L207-L209).\n\n3) We further extracted the clinical codes, including diagnosis codes, ICD-9 medication codes, and ICD-9 procedure codes, of 35,851 hospital admissions from the MIMIC-III (L141-L142).\n\n4) As a result, each entry in our built PI dataset is associated with a hospital admission ID, a patient ID, the clinical codes, the patient’s health records, and a patient instruction.\n\n5) At last, we randomly split the dataset into 80%-10%-10% train-validation-test sets according to the patient ID. Therefore, there is no overlap of patients between train, validation and test sets (L210-L211). \n\n> **Q4**: Does eICU[*] dataset have the text notes?\n\n**A4**: Many thanks for recommending the eICU dataset. It's a nice dataset. We noticed that there are lots of useful notes, e.g., care documentation and care plans, in the eICU dataset. Therefore, it's possible to build a new task of automatic care plan generation, which aims to use care documentation as input to generate the care plans. In particular, our experiments on the PI dataset verify the effectiveness and the generalization capability of our proposed approach in generating clinical text given clinical notes as input. It implies that our approach has the potential to be applied to other clinical text generation tasks, e.g., the care plan generation task in the eICU dataset. We will follow the promising direction to adapt our approach to the eICU dataset in our next version.\n\n> **Q5**: Some relevant citations are missing.\n\n**A5**: Thanks for your suggestion. We will take your advice to cite and compare with more works in image-based medical report generation.",
" Thanks for your helpful comments! If you have further concerns, please feel free to contact us.\n\n> **Q1**: During train and test split, did the authors make sure that the same patient is not used in train and test?\n\n**A1**: We have made sure that the same patient is not used in both training and testing. The statistics of our dataset in Table 1 show that the average hospitalization rate per patient is 28,673/22,423=1.28. As described in L210-L211, we randomly divided the dataset into train-validation-test partitions according to patient ID instead of hospital admission ID. Therefore, there is no overlap of patients between train, validation and test sets. The goal of our work is to generate patient instructions (PIs) for previously-unseen patients.\n\n> **Q2**: Did you consider UMLS for knowledge injection and creating the final graph?\n\n**A2**: Thank you for the suggestion. According to the ablation study in Table 5, the incorporation of a relatively small knowledge graph built on the internal training set has already brought 8% relative improvement in BLEU-4. We agree with your point that a more complex and larger graph structure, which is constructed by using larger-scale well-defined external medical ontologies/textbooks, e.g., UMLS, can further boost the performance and the generalization capability of the approach. And it is a good direction to use the UMLS to boost the performance and particularly process a new patient with an unseen new ICD code. We will explore it in future works. \n\n> **Q3**: The patients were matched via their procedure, diagnostic codes but their other demographic/personal information such as age and gender wasn't considered.\n\n**A3**: Thank you for pointing out a potential analysis point. We follow your advice to incorporate age and gender information into our approach to match patients. Specifically, to ensure an even distribution of the data, we divide the ages into three age-groups: Age < 55 (29.9%), 55 <= Age < 70 (30.5%), and Age >= 70 (39.7%). As a result, given a new male/female patient at 61 years old, we will match male/female patients in the age-group 55 <= Age < 70 in the training data to generate the PIs. The results are reported in the Table below.\n\n| Methods | METEOR | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 |\n| :------------------- | :------------: |:------------: |:------------: |:------------: |:------------: |:------------: |:------------: |:------------: |\n| Seq2Seq [1] | 19.9 |39.0 |20.3 |37.1 |41.6 |32.5 |27.9 |25.1 |\n|   with Re$^3$Writer |20.9 |**40.8** |21.9 |38.6 | 43.2 | 34.2 |29.7 | 26.8 |\n|    with Age+Gender | **21.0** | **40.8** | **22.0** | **38.7** | **43.5** | **34.5** | **29.9** | **27.1** |\n| | | | | | |\n| Transformer [43] | 21.8 | 42.1 |21.6 | 38.9 |47.1 |36.8 | 31.4 |27.3| \n|   with Re$^3$Writer | 23.7 | 45.8 |24.4 | 42.2 | 52.4 | 41.2 | 35.0 | 30.5 |\n|    with Age+Gender | **24.1** | **46.1** | **24.6** | **42.5** | **52.9** | **41.6** | **35.3** | **30.8** | \n\nThe results show that the incorporation of demographic/personal information can indeed further boost the performance, we will follow your constructive advice to conduct detailed analyses in our revision.\n\n> **Q4**: Why not consider a pre-trained model?\n\n**A4**: Thank you for the advice. For the encoder of our approach, we adopted the pre-trained model BERT to encode the information of matched (i.e., retrieved) patients. 
For the decoder of our approach, which is used for language generation, in our preliminary experiments, we attempted to adopt the pre-trained model T5 [R1] to generate the final PIs. The results of the pre-trained model T5 and the model trained from scratch are:\n\n| Methods | METEOR| ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU-1| BLEU-2| BLEU-3| BLEU-4|\n| :------- | :------------: | :----------: | :----: | :----: | :----: | :----: | :---------: | :---------: |\n| T5 [R1] | 20.5 | 41.2 | 21.3 | 38.7 | 44.6 | 34.7 | 29.8 | 26.1 |\n| Ours | **23.7** | **45.8** | **24.4** | **42.2** | **52.4** | **41.2** | **35.0** | **30.5** |\n\nAs we can see, the pre-trained model T5 performs worse than our model trained from scratch. We speculate that the T5 was pre-trained on general texts. Therefore, it's necessary to fine-tune the T5 to well adapt to the generation of clinical text. However, considering the expensive computational resources consumed by the pre-trained model T5, we did not further fine-tune the pre-trained model to take T5 as our decoder.\n\n[R1] Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 2020.",
" > **Q3**: Further technical explanations are needed (e.g., weights between nodes).\n\n**A3**: The weights between nodes are calculated by normalizing the co-occurrence of pairs of nodes in the training corpus. In detail, as shown in L159-L160 and Appendix A in our Supplementary Material, we consider all clinical codes (including diagnosis codes, medication codes, and procedure codes) during hospitalization as nodes, i.e., each clinical code corresponds to a node in the graph. The weight between two nodes is calculated by the normalized co-occurrence of these two nodes counted from the training corpus. Figure 1 in the Appendix illustrates how the medical knowledge graph is built. In Figure 4, 0.013 means that the frequency of co-occurrence of \"abdominal pain\" and \"urinary tract infection\" is 0.013.\n\n> **Q4**: How the optimum model was selected?\n\n**A4**: We selected the optimum model based on the performances on the validation set. Specifically, for the hyper-parameter $N_\\text{P}$ (the number of retrieved previous PIs), we reported the results and analyses of different $N_\\text{P}$ in Appendix B of Supplementary Material. For the learning rate, we reported the effectiveness of our model in boosting the robustness of model to a wide range of learning rates in Figure 3. For the selected modules, we reported the performances and analyses of different modules in Table 5 and L280-L303 in our paper.\n\n> **Q5**: How it is built and tested? Did the authors build the deep-learning approach on this dataset? If so, how can authors make sure of its applicability to real-world health data (e.g., clinical validation)?\n\n**A5**: To build the dataset, as shown in L193-L199 of our paper, we collected and built the benchmark clinical dataset Patient Instruction (PI) based on the publicly-accessible MIMIC-III v1.4 resource (https://physionet.org/content/mimiciii/), which integrates de-identified, comprehensive clinical data for patients admitted to the Beth Israel Deaconess Medical Center in Boston, Massachusetts, USA. It means that the data we built was originally produced in real-world clinical settings, not synthetic or machine-generated.\n\nTo test the dataset, in Section 4.2 of our paper, we incorporate the proposed approach into six representative language generation deep-learning models with different structures. As shown in Table 2 of our paper, our approach can boost the performance of baselines across all metrics. Meanwhile, in Section 4.3 of our paper, we further invited doctors to conduct the clinical validation to measure the effectiveness of our approach in terms of its usefulness for clinical practice. Table 3 and Table 4 show that our approach can generate meaningful and desirable PIs that are recognized by clinicians, indicating that our approach has the potential to assist physicians and reduce their workload.\n\n",
" Thanks for your helpful comments! If you have further concerns, please feel free to contact us.\n\n> **Q1**: How did the authors analyse the accuracy of the generated PIs?\n\n**A1**: We analysed the accuracy of the generated Patient Instructions (PIs) using both automatic metrics and human evaluation.\n\nFor automatic evaluation, as shown in Table 2, we considered the widely-used natural language generation metrics, i.e., BLEU, ROUGE and METEOR, which are calculated based on the text matching between the generated PIs and referenced PIs annotated by professional clinicians. L238-L246 and Table 2 show that our approach can boost baselines consistently across all metrics, with relative improvements up to 20%, 11% and 19% in BLEU-4, ROUGE-L and METEOR, respectively.\n\nMeanwhile, as shown in Table 3 and Table 4 in Section 4.3, we invited three annotators, including an experienced clinician and two senior medical school students, to evaluate the generated PIs according to the quality and usefulness in clinical practice. All three annotators have sufficient medical knowledge. Specifically, L258-L264 and Table 3 evaluate the generated PIs from three perspectives: fluency, comprehensiveness and faithfulness. Table 4 shows that the proposed approach can generate more accurate PIs than the baselines, improving the usefulness of AI systems for better assisting physicians in clinical decision-making and reducing their workload.\n\n> **Q2**: The comparison of the output of the proposed deep-learning approach with more datasets/examples.\n\n**A2**: Thank you for pointing out a potential analysis point. In this paper, we evaluated the proposed approach with the MIMIC-III dataset which is real clinical data. We follow your constructive advice to evaluate the performance of our approach with more fine-grained datasets.\nSpecifically, we further divide MIMIC-III into three sub-datasets according to **Age**, **Gender**, and **Disease**. 
Three tables below show the results of our approach Re$^3$Writer on the three sub-datasets:\n\n|Age Group||Age<55|||||55<=Age <70|||||Age>=70|||\n| :------ | :--------: | :----------: | :--------: | :------: | :------: | :---------: | :--------: | :-------: | :------: | :----------: | :--------: |:-------: |:--------: |:--------: |\n| Methods| METEOR| ROUGE-L| BLEU-4| | | METEOR| ROUGE-L| BLEU-4| | | METEOR| ROUGE-L| BLEU-4|\n| Seq2Seq [1]| 18.2| 34.7| 21.9||| 20.7| 39.5| 26.7||| 20.7| 37.1| 26.5|\n|   with Re$^3$Writer | **19.2** | **35.6** | **23.7** || | **21.8** | **41.2** | **28.4** || | **21.5** | **38.9** | **28.1** |\n||||||||||||||| \n| Transformer [43] | 20.2| 36.9| 24.4|| | 23.1| 41.3| 28.5| | | 22.8| 39.0| 28.4 |\n|   with Re$^3$Writer | **22.9** | **40.1** | **28.5** | | | **26.2** | **45.0** | **31.8** || | **24.3** | **42.7** | **31.2** |\n\n\n|Gender|| Female|||||| Male || \n| :--------- | :---------: | :---------: | :--------: | :--------: | :------: | :------: | :--------------: | :------: | :--------: | \n| Methods| METEOR | ROUGE-L | BLEU-4 | | | | METEOR | ROUGE-L | BLEU-4 | \n| Seq2Seq [1] | 19.8 | 35.9 | 25.0 | | | | 20.0 | 38.0 | 25.2 |\n|   with Re$^3$Writer | **20.6** | **37.6** | **26.3** | | | | **21.1** | **39.5** | **27.2** |\n||||||||||||| \n| Transformer [43] | 21.5 | 38.1 | 26.9 | | | | 22.0 | 39.6 | 27.6 |\n|   with Re$^3$Writer | **23.2** | **41.3** | **30.1** | | | | **24.1** | **43.0** | **30.8** |\n\n\n|Disease|| Hypertension|||||| Hyperlipidemia |||||| Anemia|||\n| :----------- | :----------: | :--------: | :--------: | :------: | :------: | :-------: | :------: | :-------: | :-----: | :----------: | :-----------: | :------: |:------: |:------: |:------: |:------: |\n| Methods| METEOR | ROUGE-L | BLEU-4 |||| METEOR | ROUGE-L | BLEU-4 |||| METEOR | ROUGE-L | BLEU-4|\n| Seq2Seq [1]| 21.3 | 39.8 | 27.9 |||| 21.3 | 41.7 | 27.2 |||| 18.0 | 36.4 | 20.7 |\n|   with Re$^3$Writer | **22.6** | **41.4** | **30.4** |||| **22.5** | **43.9** | **29.5** |||| **18.8** | **37.6** | **22.3** |\n||||||||||||| \n| Transformer [43] | 22.8| 42.5| 30.7|||| 23.0| 44.7| 30.3 |||| 19.6| 38.2| 23.4|\n|   with Re$^3$Writer | **24.6** | **45.1** | **33.5** |||| **24.9** | **46.4** | **33.8** |||| **21.8** | **41.3** | **27.4** |\n\nAs we can see, the proposed approach can consistently boost baselines across different ages, genders and diseases on all evaluation metrics, proving the generalization capability and the effectiveness of our method to different datasets/examples. We will follow your advice to include more detailed analyses in our revision.",
" This paper introduces a deep-learning approach called Re3Writer to imitate the clinicians’ working patterns for automatically generating patient instructions at the point of discharge from the hospital. Strengths:\n\n1. Topic is timely\n2. The application is interesting\n3. The paper is easy to follow\n\nWeaknesses:\n1. Lack of technical details (e.g., hyper parameter tuning, information about node transitions/weights)\n2. Lack of output validation\n3. Novelty is limited\n The application was tested using one generated dataset. How did the authors analyse the accuracy of the generated PIs? \nI would suggest performing the comparison of the output of the proposed deep-learning approach with more datasets/examples. Otherwise, it is hard to assess the novelty of the work.\n\nFurther technical explanations are needed (e.g., weights between nodes). For example, In Figure 4, under the reasoned medical knowledge box. What does 0.013 mean? \n\nHow the optimum model was selected? Further information regarding hyper-parameter tuning can be useful.\n\nBuilding a PI dataset is defined as a novelty (contribution). How it is built and tested? Did the authors build the deep-learning approach on this dataset? If so, how can authors make sure its applicability to real-world health data (e.g., clinical validation)? The proposed approach is interesting. However, further technical details, validation and also the comparison of the deep-learning application on more datasets are needed to assess its novelty. ",
" In this paper, the authors have proposed a new task of generating patient instructions (PIs) using the MIMIC dataset. The authors also proposed a strong SOTA model for the task Re3Writer which takes into consideration the historic PIs, clinical domain knowledge and refining of the final generated PI for a patient. The authors show that their method can improve the performance of a lot of baseline models by following the three techniques defined in the ReWriter framework. Strength of the paper: \n1. It tackles an important problem and given it is created with the help of an openly available dataset (MIMIC), other researchers can also access the dataset and build further models.\n2. The ReWriter approach is very intuitive as it considers the past PIs which generating the PI for a new patient and then considers the medical knowledge for refining it. This medical knowledge further helps the model in generating clinically coherent PIs.\n3. All experiments are well defined but I really like the Table 5 ablation study because it justifies the use of each step in ReWriter. \n\nWeakness/Questions:\nThese are some things on which I would like to hear from the authors.\n1. During train and test split, did the authors make sure that the same patient is not used in train and test. Even if the HADM (hospital admission ID) is different, a same patient can have similar PIs such as older patients can have same recommendations during multiple visits.\n2. Did you consider UMLS for knowledge injection? Because the information of a new patient with a new ICD code that has not been used before in the train set would get ignored. Whereas UMLS can help with that information, if you can run an entityLinker to get different clinical entities and then create your final graph. \n3. The patients were matches via their procedure, diagnostic codes but their other demographic/personal information such as age and gender wasn't considered. A patient with same ICD codes but with different age-group could potentially be assigned different PIs because of their health.\n4. Note: I might be wrong here but why not consider a pre-trained model? Is that because of constraint of the note length? Weakness/Questions:\nThese are some things on which I would like to hear from the authors.\n1. During train and test split, did the authors make sure that the same patient is not used in train and test. Even if the HADM (hospital admission ID) is different, a same patient can have similar PIs such as older patients can have same recommendations during multiple visits.\n2. Did you consider UMLS for knowledge injection? Because the information of a new patient with a new ICD code that has not been used before in the train set would get ignored. Whereas UMLS can help with that information, if you can run an entityLinker to get different clinical entities and then create your final graph. \n3. The patients were matches via their procedure, diagnostic codes but their other demographic/personal information such as age and gender wasn't considered. A patient with same ICD codes but with different age-group could potentially be assigned different PIs because of their health.\n4. Note: I might be wrong here but why not consider a pre-trained model? Is that because of constraint of the note length? Currently in this draft, no serious limitations are discussed. It is great to see Fig.4 where the model performs exceptionally well than the baseline model, it would be great to see some examples where ReWriter is consistently wrong.",
" The paper proposed the way (Re$^{3}$Writer) to automatically generate the Patient Instruction (PI) to reduce the workload of clinicians. Re$^{3}$Writer consists of three components: Retrieve (to retrieve instructions of previous patients according to the similarity of diagnosis, medication and procedure), Reason (to reason medical knowledge into the current input patient data with the help of the Knowledge Graph), and Refine (to refine relevant information from the retrieved working experience and reasoned medical knowledge to generate final PIs). The authors built a dataset PI which is based on the MIMIC 3 data and showed experimentally that with the help of their method performance of 6 different SOTA models can be substantially boosted across all metrics (BLEU-4, ROUGE-L, METEOR). They also showed the results from human evaluations to evaluate the approach for the clinical practice routine. Strength:\n\n1. Overall the paper is well organised, easy and enjoable to read. The contributions and motivation are clearly put. In my opinion this paper is relevant to the people from Machine Learning for Healthcare domain. The problem of generating the clear PI is very relevant for the clinicians. The attempt of medical reports (not instructions) generation were done before, but most of them used other modalities (i.e images). Here authors provided the way to utilise only the text knowledge.\n2. Previous approaches are clearly explained and cited; experimental settings are well explained.\n3. With the help of Re$^{3}$Writer the performance was significantly increased (Table 2). Ablation studies clearly shows the impact of each component in the final result (Table 5).\n\nWeaknesses:\n1. More details about the PI dataset will be helpful: i.e did authors use discharge summaries? Also about the patients notes - they are sparse in time of the patient stay in hospital (i.e nursing and physicians notes, radiology reports), did author just concatenate all these notes, or did they perform the selection somehow?\n2. Additional ablation studies about what type of notes (nursing notes, physicians notes, radiology) contributes more in the Re$^{3}$Writer performance. However, it is a minor comment.\n3. Some relevant citations are missing, for example \"Auto-Encoding Knowledge Graph for Unsupervised Medical Report Generation\", Fenglin Liu, Chenyu You, Xian Wu, Shen Ge, Sheng wang, Xu Sun, NeurIPS'21. This paper (and not only this) also utilises the Knowledge Graph structure in order to generate the medical reports. Not the same idea, but it will be helpful for reader to see the full overview.\n 1. Will authors publish the PI dataset? Or it will be enough to use the provided code and instructions to generate it?\n2. Could authors provide more details about the dataset generation?\n3. Does eICU[*] dataset have the text notes? If yes, will it be possible to include the results on this data as well?\n\n*Tom J Pollard, Alistair EW Johnson, Jesse D Raffa, Leo A Celi, Roger G Mark, and Omar Badawi. The eICU collaborative research database, a freely available multi-center database for critical care research. Scientific data, 5(1):1–13, 2018. Limitations were discussed."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"OAnf_CHn_A",
"wQJFyglIT9m",
"VEChKQ5masm",
"ieD96g-LOlF",
"k40K1LyJS8",
"cMxZ9vWjEqo",
"h9brTxqaQDY",
"0DTHNxlQzlr",
"VEChKQ5masm",
"nips_2022_dp0zWsdOV1h",
"nips_2022_dp0zWsdOV1h",
"nips_2022_dp0zWsdOV1h"
] |
nips_2022_Z9ldMhplBrT | Rethinking the compositionality of point clouds through regularization in the hyperbolic space | Point clouds of 3D objects exhibit an inherent compositional nature where simple parts can be assembled into progressively more complex shapes to form whole objects. Explicitly capturing such part-whole hierarchy is a long-sought objective in order to build effective models, but its tree-like nature has made the task elusive. In this paper, we propose to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. The hyperbolic space is the only space that can successfully embed the tree-like nature of the hierarchy. This leads to substantial improvements in the performance of state-of-art supervised models for point cloud classification. | Accept | The paper presents a regularization for point cloud representation learning aiming to promote a part-whole hierarchy through a hyperbolic space. Most of the reviewers agree the idea of using the hyperbolic space is new and interesting. The experiment results seem to be sufficient. There was some confusion on how part is defined and compositionality in the paper. But the AC feels the paper has sufficient merit to be published It is required that the authors incorporate the reviewer feedbacks in the revised manuscripts. | train | [
"nC_4hYkyS0o",
"UezFn4AyY5f",
"6OFaqrpmbPK",
"Ku5kQ9N2dUt",
"EeMGid93iX",
"x69Vha6lIGO",
"0vULY7Etcx7",
"k6-EMUCcz0n",
"DLoXOa_WA1M",
"yxzgwQTuUeA",
"wQUhnW2inRY",
"7mG9TLBJjW",
"3GbteWpoZr73",
"fRm6ELeIzMj",
"fY9DiJPFCI",
"B_58HVlxahL",
"NcdjnjUsW68",
"LTPK8g1zaWK",
"PP4emTAJXDC"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We report the results of using partial shape augmentation to reinforce the DGCNN backbone. The training generated partial shapes with a random number of points to supplement the full shapes. As mentioned above, the improvement over the non-augmented baseline is not significant. However, we do observe improvements in the robustness tests, albeit not as large as those obtained by the proposed method. We will update the robustness results in the final paper.\n\nModelNet40 (OA%)\n\nDGCNN: 92.9\nDGCNN+aug.: 93.1\nDGCNN+HyCoRe: 93.7\n\n\nRobustness tests (new versions of Figs.5,6):\n\nhttps://ibb.co/kxL97gw\n\nhttps://ibb.co/Z8ZRQBm",
" I appreciate the further feedback!\n\nAgain, I have to reiterate that both points go back to testing against backbone networks reinforced by partial data. Fig 5,6 I presume are comparison against non-augmented backbones, so although impressive, has little evidence supporting that the proposal is not a mere data augmentation strategy.\n\nI will take these into account during the final disucussions.\n\nThank you for the feedback!",
" Regarding comparisons against using partial data as training samples, our preliminary experiments did not show any significant advantage in using this strategy. Exploiting them in the regularized way of EuCoRe/HyCoRe appeared to be the best course of action. However, we also remark that our comparisons are not purely against backbone models. As shown in Table 1, we compare against self-reconstruction [33], STRL [34], DCGLR [7], PointGLR [5] which are all self-supervised pretraining techniques that can be interpreted as generating more or less sophisticated \"augmentations\". These can be considered as the suggested data augmentation baseline.\n\nRegarding segmentation, as mentioned to reviewer 45jd, it could be possible to include some preliminary results replacing the classification head with the segmentation head, and regularization with the inverted hierarchy, but we feel this extends the scope of the paper too much and does not allow for a proper discussion of the differences between classification and segmentation in the page limits of a conference paper. The reviewer also, interestingly, mentions the need for robust techniques. Indeed, this was also a concern of ours and the experiments in Figs. 5 and 6 show how the proposed method provides substantial gains in presence of subsampled or partial input point clouds.",
" Thank you for clarifying the concerns!\nThe questions were mostly clarified by the rebuttal.\nI do however still have concerns regarding my points 1 and 3.\n\nRegarding point 1, about the method requiring extra effort:\nafter further reading, I agree with Reviewer r1Np that the comparison may not be completely fair.\nFrom my perspective, it seems that the proposed strategy is a way of data augmentation, therefore comparison with backbone models with the partial data also as training data should clarify the true advantages of the proposal. \n\nRegarding point 3 about segmentation:\nI find the reasons provided by the authors not too convincing. As the point cloud analysis is moving onwards to solving more challenging scenarios such as achieving rotation-invariance and noise-resistance, I believe the proposal require some versatility if it were to be proposed not to address these issues. It may not be in the interest of the authors, but can't the global feature be extracted by simply replacing the classification head with the pointwise segmentation head? Even thought there might not be much contribution technically, such results seem necessary to be presented as a point cloud analysis paper.",
" Thanks for the author's response! It has addressed my concerns. I will keep my rating as Accept.",
" Regarding the experiment in Table 3, we would like to remark that EuCoRe is also a novel regularization method for the classification problem based on the ideas presented in this paper. Table 3 just shows that implementing those ideas in the hyperbolic space is much more effective. We thus believe that the correct baseline is the published literature presented in the other Tables, more than a suboptimal implementation of our own contribution. Nevertheless, a 1 point improvement over our own EuCoRe is very significant on the difficult ScanObjectNN dataset, where improvements by state-of-the-art methods are often even smaller.\n\n\nWe agree with the reviewer that explicit compositionality in the sense of hierarchical structural inference or generative modeling is not our focus. We are ultimately interested in boosting the performance of a classifier via regularization, and it seems from experimental evidence that the \"straighforward kind of compositionality\", as described by the reviewer, we exploit is quite effective and, in fact, more effective than what has been studied in the literature and reported in our experiments tables. We would be glad if our paper pointed out this important phenomenon and further work focused on integrating more refined kinds of compositional models.\n\n",
" Thank the author for providing a detailed rebuttal!\n\nI still have some questions about the rebuttal. \n\n1. In Table 3, comparing with DGCNN-EuCoRe and DGCNN-HyCoRe, the improvement is only ~1% in average accuracy improvement across different dimensions. DGCNN-EuCoRe is the direct baseline for this paper, however, this improvement is not huge in the ScanObjectNN dataset. \n\n2. It's mentioned that \n> \"we want to exploit a hierarchy between whole objects and some kind of partial shapes that allows to induce better clusters for classification of the objects. In fact, the method does not provide any kind of part segmentation output.\" \n\n> \" This is what creates the property for which tracing the geodesic between two parts (or objects) at similar radius, results in passing through smaller parts that share a commonality of shape with the endpoints.\".\n\nThe major problem I see in this way is that the \"part\" defined in this paper only means spatially connected subgroups of the whole shape, and achieving this kind of compositionality is quite straightforward, e.g. sort the points along one axis, and gradually increase along this axis. It's also agreed with the authors that there's no explicit compositionality in the formulation. I don't think it is reasonable to claim the compositionality in the paper. ",
" >Limitations:\n\"The authors do mention the procedure of preparing the partial shapes to be the main limitation of the proposal. The method currently takes a random point and collects N′ nearest points to define one partial shape. Using segment data could improve the partial representation of the proposed feature space. Also, I believe lack of a framework to use the obtained feature for shape segmentation is also a drawback, as the authors go so far as to including parts of shapes as input to the proposed framework.\"\n\nWe also believe that segmentation is an important framework to be considered, and indeed we are currently working on this. As discussed in the replies to the other reviewers, we have found that the optimal regularization differs between classification and segmentation. However this study is in progress and will be reported in a separate paper.\nRegarding definitions of parts alternative to N’ spatial nearest neighbors, we agree that it could be interesting to test the use of segmentation labels. However, we opted not to do that to avoid extra labeling requirements and fit the standard classification setting. We are also exploring the possibility of defining parts as nearest neighbors in feature space in order to exploit self-similarities.",
" We thank the reviewer for their positive assessment of our work. In the following , we address the raised concerns on a point by point basis.\n\n> 1. “The training procedure seems to require effort...”\n\nWe thank the reviewer for this comment. We conducted a test to evaluate the computational time. Although this does not change in inference, the regularization in the training causes a delay. We estimate an incremental training time of the 60% over the training without regularization. We would also like to note that this time is still small when compared to self-supervised pretraining techniques, like contrastive learning, that require training for thousands of epochs and huge batch sizes.\n\n> 2.”It is doubtful whether the claimed hierarchy is actually achieved by the proposal...”\n\nAs we discussed in the response to the other reviewers, we can experimentally observe that the geometric placement of objects and parts approximately follows our description. This means that geodesics approximately embed tree distances. Moreover, while we do not require nor exploit semantic parts, the toy experiment performed for reviewer r1Np interestingly shows that even embeddings of semantic parts and compositions thereof follow the described hierarchy.\n\n> 3. “Despite the efforts to consider parts of shapes, it is disappointing that the authors do not include any point segmentation task.One can easily imagine using the feature as the global feature, which can be concatenated with conventional point-wise feature to conduct such task.”\n\nAs we discuss in the responses to other reviewers, we deliberately omit a discussion on part segmentation to focus on whole-object classification. The reason is that there are significant differences in the segmentation task, which lead to significant differences in methodology and presenting both tasks would not fit the limits of the paper. In particular, for the part segmentation task one would like the embeddings of the parts to lie close to the Ball boundary, so that they can cluster more effectively as they are used to directly derive the labels. This is in contrast with the object classification problem where we want the embeddings of the whole objects to be on the boundary. This means that, for part segmentation, the optimal hierarchy to be induced is the opposite of what is desired for classification. Fitting a thorough discussion on this matter with supporting analysis would be impossible within the constraints of the paper, so we chose to limit the scope to classification and present this in future work. We also do not believe that concatenating the global embedding z_whole to the point feature would carry relevant information for the part segmentation task, especially because part semantics are never used nor observed in the regularizer.\n\nQuestions:\n\n> 1. “I wondered if the partial point clouds would contaminate the feature space of the backbone network. Have the authors tried freezing the backbone networks so that the feature calculation backbone network is not affected by the triplet loss?”\n\nIn the Table 3 we train Hype-DGCNN, a DGCNN with hyperbolic projection without the regularizations. The model has a performance loss, and it improves in accuracy when independently adding one of the two regularizers and finally gets the best accuracy when both regularizers are used (Table 4). 
We do not think that freezing the backbone would be beneficial as this would prevent it from learning a feature extractor that follows the desired hierarchy.\n\n> 2.”I also wondered what would happen if the partial point clouds created in each epoch for the constraints were centralized.”\n\nWe thank the Reviewer for this interesting comment. We did not centralize the parts, but since this seems a good idea, we will perform an experiment to see if any improvements can be obtained. \n\n> 3. “If segmentation was conducted in the suggested way in the section above, how would the results be? It is a shame that partial information is only utilized to map the point clouds in the hyperbolic space. All the efforts to include such data do not seem to be fully utilized.”\n\nWe want to remark that in our work we did not use labeled parts to avoid extra labeling requirements and work within the standard framework for object classification. For more detail please see the previous response regarding why we have not addressed segmentation.. \n\n> 4. “Have the authors attempted to use other sets of data for visualization? ”\n\nWe thank the reviewer for this constructive suggestion. We did not include this kind of data for visualization, but we could add some examples of this type in the final version.\n",
" We thank the reviewer for their positive assessment of our work. In the following , we address the raised concerns on a point by point basis.\n\n> 1. “Such definition of the hierarchy looks suspicious to me, as there shouldn’t be an unique sequential order / hierarchy to define how an object instance is composed in piecewise order, and it makes less sense to require different objects correspond to a single “common part ancestor””\n\nWe agree with the reviewer that there could exist a multiplicity on how an object instance is composed in piecewise order, but the same multiplicity could exist even in the opposite hierarchy, e.g. an object ancestor could be split in leaves by parts that belong to other objects. As discussed in the response to other reviewers, this apparently unintuitive hierarchy is suited to exploiting the properties of the hyperbolic space for the whole point cloud classification problem we are tackling.\nThe idea of using this kind of hierarchy is that we want the final leaves of the tree, i.e. the whole objects to nicely cluster according to the different classes. To do this, we embed small parts with generic shape near the centre (where the poincare ball has “limited volume”) and more and more specific parts or whole objects near the ball boundary (where the volume has increased exponentially as function of radius and allows comfortable clustering). The method that ensures this propagation is the variable margin in Equation (5). We remark that we do not require nor exploit a semantic definition for the parts, as we do not attempt to construct a model for part segmentation or a constructive generative process. Nevertheless, the toy experiment we performed for reviewer r1Np shows that even embeddings of semantic parts and compositions thereof follow the claimed hierarchy. Finally, we remark that we do not address part segmentation, as it would require a lengthy discussion and more analysis that cannot fit the length of the paper, but that task would require the opposite hierarchy to be optimally regularized (also see response to reviewer 45jd). \n\n> 2. “Also, the way the author designed the contrastive and hierarchical loss (equation 5) is not fully justified by the author’s definition of part-whole hierarchy. ”\n\nEquation (5) can promote different levels of embeddings from the center to the edge thanks to variable margin γ/N’ where N’ is the number of points in the part. Being dependent on N’, when a small part is embedded in the feature space, the margin is stronger and enforces the difference between the two norms to be large, landing the part near to the centre while the whole object is always pushed on the edge by the other regularizer (Equation (6)).On the contrary when the part is bigger the margin is weaker and the part can be far from the centre but above the object. The variability of N’ guarantees a continuous hierarchy from the center to the edge according to the part size. Notice that we use part size N’ as a proxy for part specificity that does not require semantic labels.\n\n> 3. “Overall, I feel “hierarchy” may not be a good explanation to what the authors actually did. Probably just part-whole contrastive learning (without the hierarchical part) is more appropriate.”\n\nWe use the term hierarchy since the hyperbolic triplet loss was used in other works to define the tree structure for various data (images, words…) with the same claim to embed this hierarchy in a continuous space. 
We agree that there are similarities with a part-whole contrastive learning but a main concept differentiates our work from it. The variable margin guarantees different parts to be located at different levels in the space, while the contrastive loss between parts and whole objects ensures the objects to be located on the edge and be connected by ancestor. We can also see in the ablations in Table 4 that the hierarchy regularizer synergizes well with the contrastive learning. A pure contrastive approach would not perform as well (third row of Table 4).\n\n\nQuestions:\n\n> 1. “further discussion of the definition of the part-whole hierarchy and its difference to object-part hierarchy usually defined in literature.”\n\nAnswered and commented in point 1 above\n\n> 2. “comments on if Rhier is sufficient to regularize hierchy with multiple levels, and also supply the result of DGCNN+HyCoRe without Rhier”\n\nAnswered and commented in point 2 above. The requested DGCNN+HyCoRe without Rhier is reported in the third row of Table 4. It is true that Rhier by itself can only marginally improve the baseline but it synergizes well when combined with Rcontr so that 1.0 AA points are gained by the synergy with respect to DGCNN+HyCoRe without Rhier.\n\n> 3. “improve visualization of figure 4, give more visual example of the pointcloud & embedding pairs.”\n\nWe thank the reviewer for this suggestion. We will do our best to improve the visualization of figure 4 and add more examples of objects pairs in the final version of the paper.\n\n",
" We thank the reviewer for their positive assessment of our work. In the following , we address the raised concerns on a point by point basis.\n\n> 1.”Since hyperbolic learning is new to the point cloud analysis community, more insights on the technical part...”\n\nWe thank the reviewer for this constructive comment. We will add more insights and explanations on the hyperbolic learning in the final version of the paper, highlighting how the new projections on non Euclidean spaces can be beneficial for point cloud data.\n\n> 2. “...However, the result section only shows it is applied to a small subset of methods that are compared...”\n\nThe architectures to which we applied the proposed method have been chosen with a clear rationale. First, PointNet++ has been one of the first architectures to process point clouds and it is still widely used due to its simplicity and relatively low computational complexity. Then, DGCNN is most important architecture belonging to the class of graph neural networks and it is widely used as a benchmarking baseline. Finally, PointMLP was the state-of-the-art for point cloud classification at the time of writing and serves the purpose of demonstrating that the proposed technique can further enhance even the best performing model.\nDuring the revision period, a new architecture (PointNext) has been posted as a preprint, further improving the state of the art. We tested our method on PointNext, still confirming relevant gains as reported below, and confirming the generality of our approach. The following results have been obtained on ScanObjectNN PointNext OA 88.0%, PointNext+HyCoRe OA 88.3%. PointNext AA 86.4%, PointNext+HyCoRe AA 87.0%. This result would place PointNext+HyCoRe as the state-of-the-art for the challenging ScanObjectNN dataset.\n\nQuestions\n> 1. “It is not clear to me why Equation (5) can promote part embeddings to lie closer to the center while whole embeddings to the edge...”\n\nEquation (5) can promote different levels of embeddings from the center to the boundary thanks to variable margin γ/N’ where N’ is the number of points in the part. Being dependent on N’, when a small part is embedded in the feature space, the margin is stronger and enforces the difference between the two norms to be large, landing the part near the center while the whole object is always pushed on the boundary by the other regularizer (Equation (6)). On the contrary, when the part is bigger the margin is weaker and the part can be far from the center but above the object. The variability of N’ guarantees a continuous hierarchy from the center to the boundary according to the part size.\n\n> 2. “Why hierarchy learning can be beneficial to the task of classification? In the end, only the whole shape is used for classification, and the parts are not used as the input. “\n\nThe idea is that building a tree structure from generic shapes to specific parts/objects induces the leaves/whole-objects to be more separate according to the parts they are composed of, eventually leading to better clustering of the objects. The benefits of HyCoRe are also visible in figures 5 and 6 where the network regularized by HyCoRe is more robust to input subsampling or partialization.\n\n> ”What is the advantage of the proposed method over a naive method that simply applies contrastive learning on the feature space of the whole shape?”\n\nThe advantage of the proposed method is strongly related to exploiting the tree likeness of hyperbolic space. 
Simple contrastive learning in hyperbolic space can lead to vectors spreading out and accumulating on boundary, also causing numerical problems. Instead, the regularization in Equation (5) prevents this phenomenon and helps to connect detailed parts with the corresponding objects and general parts at the root of different objects. We think and show some visual examples that the induced regularization generates better clusters among classes. This is also confirmed by the results in Table 4 which highlights the synergies between Rhier and Rcontr.\n\n>Limitations\n\nWe do not see a clear way for the work to have negative societal impact. Regarding limitations, as discussed with the other reviewers, the fact that the method cannot simultaneously be optimal for classification and part segmentation can be a limiting factor, as it requires a different regularizer. However, the study on part segmentation is still in its infancy and due to the complexity of the matter, part segmentation is outside the scope of this work.",
" > 3. “The comparison is not a fair comparison. I appreciate the paper compared with multiple network architectures and see the improvement over compared baselines. However, since the main idea is proposing a regularization loss in the hyperbolic space, a fair comparison should be apply the same regularization loss, but in the Euclidean space. The compared results is only using supervised loss for pretraining and fine-tuning with the classification loss, what about using both losses for fine-tuning? It was mentioned in Line 95 that \"they are also mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision\", but there's no evidence to support this. \"\n\nThe compared results is only using supervised loss for pretraining and fine-tuning with the classification loss, what about using both losses for fine-tuning? It was mentioned in Line 95 that \"they are also mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision\", but there's no evidence to support this. ”\nWe agree with the reviewer that it is important to test the effectiveness of the regularization loss in the Euclidean space. This is done in Table 3 where EuCoRe is the same regularization loss but in the Euclidean space. As it can be noticed, the Euclidean space provides only a small improvement.\nRegarding the comparisons with other models, we consider methods as they have been presented in the literature. Several of the techniques we report in the “finetuned” category have been proposed in the context of self-supervised learning and exhibit strong performance in that setting. However, once labels are added and the self-supervised pretraining is finetuned, they do not significantly outperform their counterparts trained without the self-supervised schemes. This is the claim we make at Line 95, and it is supported by the results in Table 1. Specifically, we can see that Self-Recon [33], STRL [34], DCGLR [7] are unable to improve the performance of DGCNN by more than 0.3 OA points and PointGLR [5] only improves PointNet++ by 0.1 points. These results are reported by the original authors themselves in the usual pretraining followed by finetuning configuration, and we assumed that if their methods worked better by using both losses for finetuning as suggested by the reviewer, the original authors would have reported it.\n\n> 4. “Since the paper is proposing to learning compositionality with part-whole hierarchy, a more convincing experiment would be running with a dataset that contains part-whole hierarchy (e.g. PartNet [a]), and provide a quantitative analysis on the learnt hierarchy v.s. other baselines that learns part whole hierarchy.”\n\nWe would like to remark that our goal is not to learn part-whole hierarchy in the semantic sense that a generative model or a part segmentation model would do. Rather, we want to exploit a hierarchy between whole objects and some kind of partial shapes that allows to induce better clusters for classification of the objects. In fact, the method does not provide any kind of part segmentation output. \nWe performed an experiment that follows the spirit of the reviewer’s suggestion to highlight the properties of our regularized space. We consider the embeddings of semantic parts, as extracted from ShapeNet Parts using the part labels. We progressively combine semantic parts, embed them and check where they land in the hyperbolic space by measuring the hyperbolic norm of the embedding (distance from Ball center). 
This is presented in the following table. We can see that as parts get progressively combined to form the whole object, the norm increases, thus moving the embedding from the center of the ball towards the edges in the typical fashion of embedded hierarchies.\n\nLegend: Part or composition of parts (hyperbolic norm)\n\nTable (5.32)\nPlane+uprights (4.56)\nLegs+uprights (2.08)\nPlane (4.07)\nLegs (2.05)\nUprights (1.99)\n\nAircraft (4.98)\nWings+tail+engines (4.56)\nWings+tail (4.45)\nFuselage+tail (4.23)\nEngines (4.44)\nWings (4.22)\nFuselage (3.37)\nTail (2.94)\n\n\n> 5. (minor point) In the Fig. 4 visualization, it seems the clustering is not great?\n\nFigure 4 is a qualitative 2-d visualization of a 256-d dimensional hyperbolic feature space. These visualization techniques can be quite unstable due to the aggressive dimensionality reduction in the hyperbolic space, so they must be used only for a high-level qualitative overview. In this case, it is mostly useful to see that larger dots are closer to the boundary, that there is some clustering and that some small dots seem to bridge multiple large dots clusters.\n",
" We thank the reviewer for their interest in our work. In the following the address the raised concerns on a point by point basis.\n\n> 1.“The definition of part-whole hierarchy seems controversial. In the paper, the part is defined as the ancestor of the whole object, and different object can share same ancestor...”\n\nWe agree with the reviewer that our definition of hierarchy might appear unusual but it is well-motivated by considering the properties of the hyperbolic space. It is also tied to the specific problem we address in this paper, i.e., whole point cloud classification rather than part segmentation, for which, in fact, the optimal hierarchy in the hyperbolic space is the one mentioned by the reviewer and subject of our future work. To see this we report part of the response to reviewer 45jd. First, we notice that the Poincarè Ball model of hyperbolic space owes its effectiveness in our scheme to the exponential growth of the space volume as a function of the radius. This means that embeddings of things that are used to directly estimate the task labels should be placed close to the boundary of the Ball, as this is where they will be able to cluster better. In the case of whole point cloud classification, our estimated label is derived from the embedding of the entire point cloud, so this is what we aim at placing on the boundary. At the same time, the working of the hyperbolic space when embedding a hierarchy, is such that elements at higher levels of the hierarchy sit closer to the Ball center so that the geodesic distance approximates the tree-distance. This is why for the classification task considered in this paper, the natural hierarchy to consider is that of proto-parts with common shapes serving as ancestors of multiple objects. For part segmentation, the ideal hierarchy is flipped, as we desire the part embeddings to sit close to the Ball boundary and whole objects serving as common ancestors of their parts.\nIn an ablation study that we did not report for reasons of brevity, we considered the flipped hierarchy suggested by the reviewer which resulted in poor performance. Indeed, this seems to go against the natural tendency of the model to follow the hierarchy we discussed in the paper.\n\n> 2. “ Overclaiming. I don't agree with the author the paper is learning to \"promotes the part-whole hierarchy of compositionality in the hyperbolic space\" Specifically, the only thing the paper is proposing a regularization on the feature space, therefore how the object is composed with different part is not clear, there's no explicit way of representing object as a hierarchical tree using the proposed method, not mentioning the compositionality. Besides, the way the paper is defining \"parts\" is through subsampling a small local region of the object (Line 188), this is not a semantic part and is only a subgroup of the object, and also the definition of part-whole hierarchy is controversial as above\"\n\nIn this work we augment point cloud classification architectures by proposing the hyperbolic embedding and a regularization specific to this space. Through the regularization, we impose that, as parts get larger, they become more specific to the corresponding object, while general parts can connect different classes of object. We believe that part size can be a reasonable proxy for specificity. 
We agree with the reviewer that there is not an explicit reasoning on compositionality, However, if we think of starting with a small part and then progressively adding points, we are moving from the center of the Poincarè Ball towards the boundary. This is what creates the property for which tracing the geodesic between two parts (or objects) at similar radius, results in passing through smaller parts that share a commonality of shape with the endpoints. \nWe also agree that we are not explicitly interested in semantically meaningful parts. A “part” as considered by our own framework does not need to be its own semantic entity in order to provide the desired effect. In other words, a semantically well-defined leg is not necessarily more useful than a blob of points capturing a piece of leg and a piece of plane which may occur in chairs, tables, etc.. The lack of need for semantically-defined parts is also a feature of the method, as it limits labeling requirements. We are in fact ultimately interested in whole point cloud classification, not part segmentation, nor generative part-based modeling. Nevertheless, it could be interesting to explore if including semantic part labels in the definition of parts has an effect on performance.\n\n",
" > Is it easy to extend this approach to semantic segmentation task? If yes, why did authors not include those experiments? If not, can authors include some discussion on this matter.\n\nWe thank the reviewer for their positive comments on our work. We agree with the reviewer that segmentation would be an interesting task to consider and, in fact, we are actively working on it. However, we have found that there are substantial differences between the optimal way to regularize the tasks of whole point cloud classification and part segmentation. This deserves a rather lengthy discussion and analysis which cannot fit a single paper, so we decided to only focus on classification for this work.\n\nThe reason why regularizing part segmentation is different from regularizing classification is found in the need to impose opposite hierarchies in the hyperbolic space. This also addresses the comments from other reviewers observing our unusual definition of hierarchy. First, we notice that the Poincarè Ball model of hyperbolic space owes its effectiveness in our scheme to the exponential growth of the space volume as a function of the radius. This means that embeddings of things that are used to directly estimate the task labels should be placed close to the boundary of the Ball, as this is where they will be able to cluster better. In the case of whole point cloud classification, our estimated label is derived from the embedding of the entire point cloud, so this is what we aim at placing on the boundary. At the same time, the working of the hyperbolic space when embedding a hierarchy, is such that elements at higher levels of the hierarchy sit closer to the Ball center so that the geodesic distance approximates the tree-distance. This is why for the classification task considered in this paper, the natural hierarchy to consider is that of proto-parts with common shapes serving as ancestors of multiple objects. For part segmentation, the ideal hierarchy is flipped, as we desire the part embeddings to sit close to the Ball boundary and whole objects serving as common ancestors of their parts. Indeed, in our experiments we observed that these two hierarchies emerge naturally just by moving to the hyperbolic space without any regularizers forcing them (albeit they are very rough without explicit regularization). Our preliminary results on part segmentation suggest that substantial gains are also possible on that task by following its optimal hierarchy. However, as mentioned, this will be subject of future work. We will briefly summarize this discussion in the conclusions section of the final version of the paper.",
" The paper proposes learning shape representation for the 3D shape classification task, \nby regularizing embeddings in hyperbolic space. The intuition on which this paper is based\nis that complex shapes can be made by combining simpler parts and this composition can\nbe explained by a tree-like hierarchy. The paper proposes regularizing shape embeddings\nsuch that simplest and most basic parts are embedded at the root level and entire shapes\nare embedded at the leaf level, where embeddings are defined using hyperbolic space.\nSpecifically, a shape embedding is first mapped to hyperbolic space and then a mobius layer\nis applied to project it to Poincare ball. The paper proposes two regularization, first is to \nencourage the whole shape embeddings to be close to leaf level and part level embeddings\nare close to the root. The second regularization encourages a shape and its parts\nto be closed to each other in embedding space and far from embeddings of parts from different\nshapes.\nThis approach consistently improve the performance of shape classification task on several\nneural network architectures. **Strengths**\n1. The paper is clearly written and explains the motivation of the proposed approach well.\n2. The proposed approach is well described\n3. All experiment details are provided ensuring reproducibility.\n4. The proposed approach consistently improve the performance using several neural network architecture, ensuring generalizability.\n5. All ablations are provided that all components are pertinent.\n**Weakness**\n1. The main weakness I see is that only shape classification is chosen to benchmark the approach. Specifically, since the embeddings are better aware of the composition the embeddings can shine in part segmentation task. 1. Is it easy to extend this approach to semantic segmentation task? If yes, why did authors not include those experiments? If not, can authors include some discussion on this matter. Yes.",
" The paper proposed to utilize hyperbolic space to learn part-whole hierarchy for 3D point clouds. The main idea is regularizing in the hyperbolic space if part or whole are representing same object, and if they are from different objects, the regularization is pushing them to larger distance (similar to triplet loss, or contrastive loss). The paper applied the proposed regularization to many different network architecture (e.g. DGCNN, PointNet++, PointMLP, etc) and improved the baseline across two datasets (ModelNet40 and ScanObjectNN). Strength:\n\n1. Utilizing hyperbolic space for regularizing part-whole hierarchy is a new idea. The geodesic distance in the hyperbolic space naturally suitable for the tree structure of part-whole hierarchy and defining the regularization on the part-whole space makes more sense comparing with defining on the Euclidean space. (although I have some questions below)\n\n2. The proposed regularization is agnostic to different network architectures, and the paper experimented on multiple different backbone for point cloud classification, each achieved improvement over compared baselines on two dataset. \n\n3. The paper is well written and easy to understand. \n\nWeakness:\n\n1. (minor point) The definition of part-whole hierarchy seems controversial. In the paper, the part is defined as the ancestor of the whole object, and different object can share same ancestor. This is not intuitive, as one object is composed of multiple **different** parts, that means, if looking from the path from part to whole, it's not a tree structure (one child node is strictly below one parent node). An intuitive way of defining part-whole hierarchy is the whole is the ancestor of the parts and parts can be share with many different objects. This is also how PartNet [a] (A dataset for part-whole hierarchy for 3D object) is created, and also the part-whole hierarchy mentioned in the seminal work [b]. \n\n2. Overclaiming. I don't agree with the author the paper is learning to \"promotes the part-whole hierarchy of compositionality in the hyperbolic space\" Specifically, the only thing the paper is proposing a regularization on the feature space, therefore how the object is composed with different part is not clear, there's no explicit way of representing object as a hierarchical tree using the proposed method, not mentioning the compositionality. Besides, the way the paper is defining \"parts\" is through subsampling a small local region of the object (Line 188), this is not a semantic part and is only a subgroup of the object, and also the definition of part-whole hierarchy is controversial as above\n\n3. The comparison is not a fair comparison. I appreciate the paper compared with multiple network architectures and see the improvement over compared baselines. However, since the main idea is proposing a regularization loss in the hyperbolic space, a fair comparison should be apply the same regularization loss, but in the Euclidean space. The compared results is only using supervised loss for pretraining and fine-tuning with the classification loss, what about using both losses for fine-tuning? It was mentioned in Line 95 that \"they are also mostly unable to improve upon state-of-the-art supervised methods when finetuned with full supervision\", but there's no evidence to support this. \n\n\n4. Since the paper is proposing to learning compositionality with part-whole hierarchy, a more convincing experiment would be running with a dataset that contains part-whole hierarchy (e.g. 
PartNet [a]), and provide a quantitative analysis on the learnt hierarchy v.s. other baselines that learns part whole hierarchy. \n\n\n5. (minor point) In the Fig. 4 visualization, it seems the clustering is not great? If we only focus on the light green color, they are spread in many regions in the graph, does this mean the clustering of light green class is not great?\n\n[a] PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding \nKaichun Mo, Shilin Zhu, Angel X. Chang3, Li Yi, Subarna Tripathi, Leonidas J. Guibas, Hao Su\n\n[b]How to represent part-whole hierarchies in a neural network\nGeoffrey Hinton\n See Weakness section above. See weakness section above. ",
" This paper presents a method for promoting the part-whole hierarchy in the learned feature space. In particular, it proposes to embed the features of a point cloud encoder into hyperbolic space. The part-whole hierarchy is enhanced through an explicit regularizer. Such a key idea is backed by the theory that the hyperbolic space (space with negative curvature) is the only space that can embed tree structures with low distortion. A regularizer layer is proposed for supervised training of point cloud classification models. It can be applied to existing architectures with a simple modification. Performance boost across a number of popular architectures is reported in the experiments. ### Strengths\n\n* The idea of promoting a part-whole hierarchy in the hyperbolic space provides a novel perspective for learning discriminative features of the point cloud, which I believe is beneficial to the community. Though it is only verified in the task of classification, I think it could be valuable for more tasks that require in-depth analysis of part hierarchies, e.g. matching incomplete point cloud, point cloud generation from parts, etc. \n\n* The experiments have shown steady performance improvement by applying the proposed method to existing mainstream backbones.\n\n* Nice visualization of the formed feature space for a better understanding of the effectiveness of the proposed approach.\n\n* Code is provided for reproduction. \n\n### Weaknesses\n\n* Since hyperbolic learning is new to the point cloud analysis community, more insights on the technical part should be given to assist the audience to comprehend why the proposed method could work as expected. I would point out what should be improved in the questions section.\n\n* The paper claims the proposed method can be applied to any existing architecture. However, the result section only shows it is applied to a small subset of methods that are compared. This makes me wonder if the results are cherry-picked. Can it provide a performance boost to arbitrary architecture? * It is not clear to me why Equation (5) can promote part embeddings to lie closer to the center while whole embeddings to the edge. The main reason is that Equation (5) only specifies constraints on the distance between the embedding of whole and part. It only encourages the part and the whole feature to stay apart. Why the part feature (instead of the whole feature) is guaranteed to be pushed to the center? What is the technical insight behind it?\n\n* Why hierarchy learning can be beneficial to the task of classification? In the end, only the whole shape is used for classification, and the parts are not used as the input. What is the advantage of the proposed method over a naive method that simply applies contrastive learning on the feature space of the whole shape? No limitations or potential negative societal impact are provided in the paper. I would recommend the authors provide some failure cases or limitations in the rebuttal (if any). ",
" This paper proposes to embed the features of a point cloud classifier into the hyperbolic space and explicitly regularize the space to account for the part-whole hierarchy. To do this, it employs a hyperbolic neural network [10] and introduces losses to regularize the part-whole hierarchy.\n Strengths:\nIt seems the proposed use of hyperbolic neural network with regularization is able to improve accuracy of different point cloud classification backbones. The experiments demonstrated the benefit of the proposed work. \n\n\nWeakness:\n\nThe intuition of the proposed method looks handwavy to me. \n\nIn literature, when talking about part-whole hierarchy, it almost always refers to a single object consists of multiple subparts, and the subparts further dissolve into smaller parts. The parts need not to be the same and most times they are assumed to be a mixture of different parts. However, in this work, it is in an unusual opposite direction, i.e. different objects share a single “atom” part and the full objects are “grown” piece by piece in sequential order (as shown in figure 2). Such definition of the hierarchy looks suspicious to me, as there shouldn’t be an unique sequential order / hierarchy to define how an object instance is composed in piecewise order, and it makes less sense to require different objects correspond to a single “common part ancestor”. This is in contrast to embedding WordNet (in the original hyperbolic neural network paper) where the word hierarchy is uniquely defined according to categorical relationships. \n\nAlso, the way the author designed the contrastive and hierarchical loss (equation 5) is not fully justified by the author’s definition of part-whole hierarchy. The loss in equation 5 only enforces the embedding of the whole object to differ from the embedding of parts, in other words it only distinguishes the last level of hierarchy against the rest levels, but there is no regularization to distinguish subparts between intermediate levels. Looking at the ablation in Table 4, it does seem that the hierarchical loss make little difference to baseline. Could the author also supply a version of DGCNN+HyCoRe without $R_\\text{hier}$? The visualization in Figure 4 is not that helpful either, it is hard to tell the size of each points when everything is densely packed, and not straightforward to see the quality of the learned “part-whold hierarchy” other than the clustering.\n\nOverall, I feel “hierarchy” may not be a good explanation to what the authors actually did. Probably just part-whole contrastive learning (without the hierarchical part) is more appropriate. \n\n\n (1) further discussion of the definition of the part-whole hierarchy and its difference to object-part hierarchy usually defined in literature.\n\n(2) comments on if $R_\\text{hier}$ is sufficient to regularize hierchy with multiple levels, and also supply the result of DGCNN+HyCoRe without $R_\\text{hier}$\n\n(3) improve visualization of figure 4, give more visual example of the pointcloud & embedding pairs. \n\nSee discussions above for explanations. I could not find a limitation section in the draft.",
" The authors present a novel representation for latent space of point clouds to capture and take advantage of the hierarchical nature of the underlying shapes. The proposed method encourages point clouds to be projected to a hyperbolic space, where common simple structures are forced to the center of the space, and the complex shapes are moved to the edge.\n\nTo achieve this they propose a module that can be added onto existing frameworks. The module takes the feature vector of a set of points representing a shape and projects it to a Poincarè ball. To achieve the proposed hierarchical representation, they also extract partial point cloud from the same shape and other shape as input and use it in the regularization term. The constraints push the partial point clouds to the center, while keeping the parts of the same point cloud close and pushing back parts from other point clouds. \n\nThe regularization strategy establishes a more effective latent feature space, as seen from the classification accuracies on the benchmark datasets ModelNet40 and ScanObjectNN, as well as from the visualization of the resulting space, which shows features from different classes are kept apart. ### Strengths\n\nThe representation is very novel and interesting, as it attempts to take advantage of the hyperbolic space in order to establish hierarchical relationships that starts from common simple parts to wholistic characteristic shapes. The authors thoroughly explain the geometric background of the proposal, and define a sound and practical constraints that drives data to be in the desired respective positions in the hierarchy.\n\nThe training procedure of preparing partial shapes to establish this hierarchy is also unique. The resulting feature space, as illustrated in Fig. 4, suggests that the constraints on the feature space have effectively mapped the latent features of even partial data to the desired locations.\n\nThe fact that the proposed module can be placed on top of existing point cloud analysis backbone networks is a huge advantage. By employing this representation, the conventional methods gain massive boosts in classification accuracy, which can be observed from Tables 1 and 2. \n\n### Weaknesses\n\nThe training procedure seems to require effort. From the text, the partial shapes have to be prepared at each epoch just to calculate the triplet loss defined in the paper. As the process is rather random, we can easily imagine the training process taking much longer than most existing frameworks. The burden of the training process seems to be excluded in the experimental section. \n\nIt is doubtful whether the claimed hierarchy is actually achieved by the proposal. The colormap in Fig. 4 seems to be well-organized near the edge of the ball, but seems rather random near the middle. The interpolation results in Fig.2 of the appendix, despite the efforts to incorporate various partial shapes in the training phase, also isn't as convincing as desired. They are definitely smaller in size, but the claimed shape commonality is difficult to observe. \n\nDespite the efforts to consider parts of shapes, it is disappointing that the authors do not include any point segmentation task, which is deeply related to local information of shapes, in the evaluation. One can easily imagine using the $\\mathbf{z}_{whole}$ feature as the global feature, which can be concatenated with conventional point-wise feature to conduct such task. 
I wondered if the partial point clouds would contaminate the feature space of the backbone network. Have the authors tried freezing the backbone networks so that the feature calculation backbone network is not affected by the triplet loss?\n\nI also wondered what would happen if the partial point clouds created in each epoch for the constraints were centralized. It would seem to lead to a more robust common part ancestor, but leading to difficulty in establishing hierarchy between whole and partial shapes.\n\nIf segmentation was conducted in the suggested way in the section above, how would the results be? It is a shame that partial information is only utilized to map the point clouds in the hyperbolic space. All the efforts to include such data do not seem to be fully utilized.\n\nHave the authors attempted to use other sets of data for visualization? Although slightly out of context, it may have been better to use a more simple set of targets for the visualization task, such as humans in different poses. Human models have parts that can be used as the partial data, and commonality and transformation are easier to observe. The authors do mention the procedure of preparing the partial shapes to be the main limitation of the proposal. The method currently takes a random point and collects $N'$ nearest points to define one partial shape. Using segment data could improve the partial representation of the proposed feature space. \n\nAlso, I believe lack of a framework to use the obtained feature for shape segmentation is also a drawback, as the authors go so far as to including parts of shapes as input to the proposed framework."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4,
4
] | [
"UezFn4AyY5f",
"6OFaqrpmbPK",
"Ku5kQ9N2dUt",
"k6-EMUCcz0n",
"wQUhnW2inRY",
"0vULY7Etcx7",
"7mG9TLBJjW",
"DLoXOa_WA1M",
"PP4emTAJXDC",
"LTPK8g1zaWK",
"NcdjnjUsW68",
"3GbteWpoZr73",
"B_58HVlxahL",
"fY9DiJPFCI",
"nips_2022_Z9ldMhplBrT",
"nips_2022_Z9ldMhplBrT",
"nips_2022_Z9ldMhplBrT",
"nips_2022_Z9ldMhplBrT",
"nips_2022_Z9ldMhplBrT"
] |
nips_2022_ZuSiW0EixjX | Redistribution of Weights and Activations for AdderNet Quantization | Adder Neural Network (AdderNet) provides a new way for developing energy-efficient neural networks by replacing the expensive multiplications in convolution with cheaper additions (i.e., L1-norm). To achieve higher hardware efficiency, it is necessary to further study the low-bit quantization of AdderNet. Due to the limitation that the commutative law in multiplication does not hold in L1-norm, the well-established quantization methods on convolutional networks cannot be applied on AdderNets. Thus, the existing AdderNet quantization techniques propose to use only one shared scale to quantize both the weights and activations simultaneously. Admittedly, such an approach can keep the commutative law in the L1-norm quantization process, while the accuracy drop after low-bit quantization cannot be ignored. To this end, we first thoroughly analyze the difference on distributions of weights and activations in AdderNet and then propose a new quantization algorithm by redistributing the weights and the activations. Specifically, the pre-trained full-precision weights in different kernels are clustered into different groups, then the intra-group sharing and inter-group independent scales can be adopted. To further compensate the accuracy drop caused by the distribution difference, we then develop a lossless range clamp scheme for weights and a simple yet effective outliers clamp strategy for activations. Thus, the functionality of full-precision weights and the representation ability of full-precision activations can be fully preserved. The effectiveness of the proposed quantization method for AdderNet is well verified on several benchmarks, e.g., our 4-bit post-training quantized adder ResNet-18 achieves an 66.5% top-1 accuracy on the ImageNet with comparable energy efficiency, which is about 8.5% higher than that of the previous AdderNet quantization methods. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/AdderQuant. | Accept | The reviewers were mostly positive about this paper [8,6,6,4], while the negative reviewer did not update the review or respond after the author's response. I do not see any major issues remaining. The suggested method seems interesting, novel, and achieves good empirical results. | train | [
"EHI2TjL368",
"dDYeqb5esCg",
"gJJ35xuiukx",
"aI3sMQJgxrY",
"xn2v6JCgIqi",
"4blwLIxn_gO",
"uCm49e8MC6O",
"mhvJdh4d-_e",
"16elRJ5NSKq",
"56aLZhvEFfB",
"Fw55hZ1lwor"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors' response has solved all of my concern, I will keep my rating about this work.",
" We would like to sincerely thank the reviewer for providing a constructive review and detailed comments.\n\n**Q1:** Better to compare the accuracy drops in quantized CNNs as well as the currently presented accuracy drop in AdderNet.\n\n**A1:** Thanks for the nice suggestion. We adopt the BRECQ PTQ method and further conduct the PTQ experiments on CIFAR-100 dataset with CNN ResNet-20 architecture. The comparison of accuracy drops are reported in the following table:\n\n| | W8A8 (%) | W6A6 (%) | W5A5 (%) | W4A4 (%) |\n| :-: | :-: | :-: | :-: | :-: |\n| CNN | +0.02 | +0.02 | -0.11 | -1.74 |\n| AddNN | -0.01 | +0.04 | -0.13 | -1.83 |\n\nThe corresponding energy consumptions are also calculated under various bits, and the results are reported below:\n\n| | W8A8 ($\\mu$J) | W6A6 ($\\mu$J) | W5A5 ($\\mu$J) | W4A4 ($\\mu$J) |\n| :-: | :-: | :-: | :-: | :-: |\n| CNN | 9.47 | 7.11 | 4.13 | 2.75 |\n| AddNN | 2.55 | 2.02 | 1.65 | 1.33 |\n\nFrom the above comparisons of accuracy drop and energy consumption, AdderNets achieve a comparable accuracy drop with CNN, but with noticeable lower energy consumption.\n\n**Q2:** It is not clear whether the proposed methods can be generalized to other neural networks with similar distribution properties.\n\n**A2:** ShiftAddNet is another impressive multiplication-less neural network that involves shift and add operations. The weights and activations of ShiftAddNet exhibit similar distributions to those of AdderNet. We apply the proposed method to quantize the add operation in ShiftAddNet ResNet-20 on CIFAR-100 dataset. Specifically, the number of groups we use is 4, the hyper-parameter $\\alpha$ controlling the ratio of discarded outliers in activations is set to 0.999. In addition, only PTQ is adopted within limited time. The accuracy drop under various bits is reported below:\n\n| W8A8 (%) | W6A6 (%) | W5A5 (%) | W4A4 (%) |\n| :-: | :-: | :-: | :-: |\n| +0.02 | -0.03 | -0.41 | -1.25 |\n\n**Q3:** How about other downstrem tasks, e.g., detection, segmentation. Or other network architectures, e.g., ViTs with adder kernels.\n\n**A3:** Thanks for the suggestions. We have performed the experiment on detection task in L266-L272 in the paper. Specifically, the mAP of 8-bit quantized Adder FCOS on COCO val2017 dataset with our quantization method is 36.5, which is 0.3 higher than the quantization method in AdderDet[1].\n\nAs for the segmentation task, as far as we know, there is no work on applying full-precision AdderNet to segmentation task so far. Since our work is focused on the quantization of AdderNet, it is difficult for us to first train a full-precision AdderNet on segmentation task with limited time and no successful precedent. However, we think this is a good topic that can be explored in the future.\n\nAs for other network architectures, there is indeed a very interesting work, i.e., AdderViT[2]. We reproduce the full-precision adder DeiT-T on ImageNet dataset following AdderViT[2]. Specifically, to save training time, we only train 400 epochs from scratch instead of the 600 epochs in AdderViT. The top-1 accuracy of our full-precision adder DeiT-T on ImageNet dataset is 68.3%. The weights and activations of adder operation in AdderViT exhibit similar distributions to those of CNN AdderNet. Therefore, the proposed quantization method can be adopted to quantize the weights and activations involved in the adder linear transformation and adder multi-head self-attention in AdderViT. 
Specifically, the number of groups we use is 4, the hyper-parameter $\\alpha$ controlling the ratio of discarded outliers in activations is set to 0.9992, only PTQ is adopted here due to limited time. Note that the PTQ accuracy can be further improved with our QAT method. The top-1 accuracy drop under various bits is reported below:\n\n| | W8A8 (%) | W6A6 (%) | W4A4 (%) |\n| :-: | :-: | :-: | :-: |\n| Ours | -0.5 | -4.1 | -8.7 |\n| QSSF[3] | -1.7 | -6.5 | -16.3 |\n\nThe advantage of our method over QSSF[3] is significant. For example, at W4A4, the accuracy drop achieved by our method is 8.7%, which is much lower than the 16.3% of QSSF. \n\n**Q4:** Does the AdderNet compatiable with SSL pretraining? e.g., MAE pretraining, and how the quantization scheme different for different stages?\n\n**A4:** Thanks for the insightful questions. The full-precision AdderNet has made impressive progress across different networks including CNNs and Transformers. Therefore, we believe that AdderNet is also likely to achieve some interesting results for MAE pretraining. However, this is far beyond the scope of our paper, and we cannot give detailed experimental results in a limited time. Overall, AdderNet has plenty of room for exploration, and we hope that our work on quantization of AdderNet can shed some light on the multiplication-less neural network and the quantization community.",
" **Q5:** How is the latency or throughput in real devices? Have you deployed the trained models to real devices yet?\n\n**A5:** For the modern CPU or GPU devices, a multiplication can be executed in a single cycle, which is roughly as fast as addition (https://stackoverflow.com/questions/21819682/is-integer-multiplication-really-done-at-the-same-speed-as-addition-on-a-modern).\n\nWe measure the inference latency of full-precision AdderNets and CNNs on a single NVIDIA Tesla V100 GPU with 1x3x224x224 input. ResNet-18 architecture is adopted and the results are provided as follows. We can see that the latencies of CNN and AdderNet are very similar. To achieve higher speed on GPU, more engineering works like professional CUDA implementation or dedicated hardware unit support, are required to handle the intensive adder operation.\n\n| CNN (ms) | AdderNet (ms) |\n| :-----------: | :-----------: |\n| $4.06\\pm0.29$ | $4.18\\pm0.35$ |\n\nThe most important benefits of AdderNet are reducing energy and circuit resources. These advantages of AdderNet will be maximally utilized by specific hardware implementation such as FPGA and ASIC. For example, QSSF[3] has deployed the quantized AdderNet on FPGA platform. Compared to 8-bit CNN, the logic resource utilization and energy consumption of 8-bit AdderNet can be reduced by 61.9% and 56.6%, respectively.\n\n**Q6:** How about the accuracy on lower-bit quantization, e.g., 3-bit? Fig. 6 shows ... Could you show comparisons with more CNN quantization methods?\n\n**A6:** We supplement the 3-bit PTQ quantization experiment of adder ResNet-20 on CIFAR-100 dataset. Besides, the comparisons with more CNN quantization methods are also supplemented. The detailed accuracy drops are reported below:\n\n| | W4A4 (%) | W3A3 (%) |\n| :-------------: | :------: | :------: |\n| AddNN | -1.83 | -6.02 |\n| CNN AdaRound[4] | -2.01 | -6.77 |\n| CNN BRECQ[5] | -1.74 | -5.95 |\n| CNN QDROP[6] | -1.70 | -5.86 |\n\nQDROP[6] establishes a SOTA result for PTQ of CNNs, as far as we know. The difference in the quantization method of CNN basically does not affect its energy consumption, which has been reported in the first question above. The accuracy drop of the the quantized AdderNet is slightly higher when compared with CNN QDROP and CNN BRECQ, but the energy advantage of the quantized AdderNet is more pronounced.\n\n**Q7:** Is the quantization bit of output (before adding the bias) consistent with that of input and weight? Do you also quantize gradients and errors during training?\n\n**A7:** The output bits are usually higher than input and weight to prevent overflow of intermediate results, which is also a common practice for quantized CNNs. Our work focuses on the quantizing of weights and activations for inference. Quantizing gradients and errors is another topic of training acceleration, which is not covered in this paper.\n\n**Q8:** The authors cluster weights in AdderNet into several groups, ... I am concerned about whether it will consume more energy to transfer the aforementioned different quantized inputs when deployed onto mobile devices.\n\n**A8:** 1) The main computational cost of neural network relies on the matrix multiplications. The increased FLOPs introduced by the multiple inter-group scale factors are negligible as presented in L172-L173 of our paper, the detailed proof is presented in A.1 in the supplementary material. 2) In addition, we only need to quantize the activations during inference since the weights can be quantized offline and saved. 
The proposed activation quantization is an elementwise operation, which does not affect the most energy-intensive convolution operation. Therefore, multiple inter-group scale factors have little effect on the overall energy consumption.\n\n**Q9.** Do you have any plan to open source the kernel implementation of AdderNet and corresponding quantization schemes?\n\n**A9:** Yes, we are glad to open-source the kernel implementation of AdderNet and the corresponding quantization methods, and we hope that our work on quantization of AdderNet can bring some contributions to the multiplication-less neural network and the quantization community.\n\n[1] Chen X, et al. An empirical study of adder neural networks for object detection[J]. NeurIPS, 2021.\n\n[2] Shu H, et al. Adder attention for vision transformer[J]. NeurIPS, 2021.\n\n[3] Wang Y, et al. AdderNet and its minimalist hardware design for energy-efficient artificial intelligence[J]. arXiv, 2021.\n\n[4] Nagel M, et al. Up or down? adaptive rounding for post-training quantization[C]. ICML, 2020.\n\n[5] Li Y, et al. Brecq: Pushing the limit of post-training quantization by block reconstruction[J]. arXiv, 2021.\n\n[6] Wei X, et al. QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization[J]. arXiv, 2022.\n",
" We would like to sincerely thank the reviewer for providing a constructive review and detailed comments.\n\n**Q1:** Some of the claims are not backed up by the method. Specifically, the authors mention that a shortcoming of prior work is using the same scale for weights and activations which is decided based on either of the two and therefore may not best fit the other. The proposed method also adopts the same scheme, where the scales are still determined by either the weights or the activations with the only difference being the increased granularity of the scale choices due to the channel clustering.\n\n**A1:** Thanks. Firstly, please allow us to correct one point: what our paper claims is that the prior works simply adopt only one shared scale to quantize both the weights and activations simultaneously (L8, L46-L47), which ignores the properties of the distribution of the weights and activations of AdderNet, leading to the problems of \"over clamp\" and \"bits waste\", further leading to poor quantized accuracy. Secondly, as we emphasized in paper, due to the limitation that the commutative law in multiplication does not hold in adder operation, non-shared scales cannot be adopted when quantizing the weights and activations in adder convolution (Eq.7). Therefore, the target of our work is how to solve the problems of \"over clamp\" and \"bits waste\" caused by adopting only one shared scale in prior works under the restriction that only the same scales between weights and activations can be adopted, and finally improve the accuracy of quantized AdderNet. Our method consists of three parts: clustering-based weights grouping, lossless range clamp for weights and outliers clamp for activations. Clustering-based weights grouping is only one part of our method. From Tab.3 in paper, using weights grouping alone can improve the performance, and combining all the proposed components obtains a much higher accuracy.\n\n**Q2:** Some questions remain regarding applying the method to new models, e.g., how to determine the number of clusters for new benchmarks.\n\n**A2:** The number of clusters is set empirically. Generally, when the number of clusters is equal to 4, the quantized AdderNets tend to achieve higher accuracy. From the Tab.4 in paper, more groups do not necessarily result in higher accuracy. \n\n**Q3:** It appears that the models are not trained with a weight regularization term which leads to a wide range of weight values in Figure 2. A standing question is whether adding strict regularization to training can change the effectiveness of the proposed method? In that scenario, the weights are forced to be within certain bounds, which may reduce the benefits of clustering as the same scale may likely work for all weights.\n\n**A3:** It's an interesting question. Adding some tricks like a higher L2 regularization or weight standardization[1] can effectively narrow the range of weight values. However, it tends to result in lower accuracy of the full-precision AdderNet, let alone the quantized accuracy. We perform the experiments of full-precision adder ResNet-20 on CIFAR-100 dataset with various L2 regularization or weight standardization. Correspondingly, the mean and variance of the absolute values in the fully trained *layer1.1.conv2* are taken as an example to show the properties of weight values. 
The results are provided as follows:\n\n| | 5e-4 (Our paper) | 1e-3 | 5e-3 | WS |\n| :---------: | :--------------: | :---: | :---: | :---: |\n| Acc (%) | 67.59 | 55.61 | 41.23 | 61.52 |\n| Mean(\\|W\\|) | 6.05 | 2.13 | 0.26 | 0.65 |\n| Var(\\|W\\|) | 90.33 | 38.27 | 17.70 | 0.58 |\n\n**Q4:** The $\\mathbb{I}$ term in equation 13 of the paper merely depends on the weight values and not the activations. Therefore, while the derived scale is perhaps optimized for the weights, there is no guarantee that it will work for the range of activations. \n\n**A4:** Thanks for the nice concern. As the reviewer said, the $\\mathbb{I}$ term in Eq.13 is calculated merely depends on the weights and not the activations. However, this does not mean that the derived scale is only optimized for weights without taking activations into account. As we stated in L161-L167 of the paper, for the majority groups where the range of weights exceeds the range of activations, the scheme of lossless range clamp for weights is adopted (Fig.3 (c)). That is, the weights is clamped to the range of activations and the error caused by clamping weights is absorbed inside the layer bias, then the scale is derived depends on the range of activations. For the minority groups where the range of weights is within the range of activations, the difference between the range of weights and activations is usually small, so the scale derived based on the range of weights can be adopted for quantizing both the weights and activations. In summary, the derived scales take both weights and activations into account.",
" **Q5:** The accuracy gains of the proposed method may be due to the more fine-grained weight scales, rather than the fact that the scales are actually tailored to both the activation and the weights.\n\n**A5:** Thanks for the nice concern. In the above question, we explained that the scales are actually derived by taking both weights and activations into account. Clustering-based weights grouping, i.e., more fine-grained weight scales, is only a part of our method. Our technical contributions include 1) distribution analysis of AdderNet, 2) group-shared quantization scales, 3) clustering-based weights grouping, 4) lossless range clamp for weights, and 5) outliers clamp for activations. The ablation study on sub-methods is shown in Tab.3 in the paper. Here we supplement the ablation study on ImageNet dataset with adder ResNet-18 architecture to better clarify the effectiveness of each part of our method. The 4-bit post-training quantized top-1 accuracy is reported below:\n\n| Group-shared scales | Weights clamp | Acts clamp | Acc (%) |\n| :-----------------: | :-----------: | :----------: | :---------: |\n| | | | 58.0 |\n| $\\checkmark$ | | | 61.7 (+3.7) |\n| | $\\checkmark$ | | 63.2 (+5.2) |\n| | | $\\checkmark$ | 61.0 (+3.0) |\n| | $\\checkmark$ | $\\checkmark$ | 64.9 (+6.9) |\n| $\\checkmark$ | $\\checkmark$ | | 65.3 (+7.3) |\n| $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | 66.5 (+8.5) |\n\n**Q6:** The authors have not discussed the limitations or potential negative social impact of their work.\n\n**A6:** We have discussed the limitations and potential negative social impact of our work in A.5 in the supplementary material. For more details, please refer to this section.\n\n[1] Qiao S, Wang H, Liu C, et al. Micro-batch training with batch-channel normalization and weight standardization[J]. arXiv preprint arXiv:1903.10520, 2019.\n",
" Sincerely thanks for your constructive comments and support.\n\n**Q1:** Besides the FLOPs and energy, it would be great to report the inference time of the proposed method.\n\n**A1:** Thanks for the suggestion. For the modern CPU or GPU devices, a multiplication can be executed in a single cycle, which is roughly as fast as addition (https://stackoverflow.com/questions/21819682/is-integer-multiplication-really-done-at-the-same-speed-as-addition-on-a-modern). \n\nWe measure the inference latency of full-precision AdderNets and CNNs on a single NVIDIA Tesla V100 GPU with 1x3x224x224 input. ResNet-18 architecture is adopted and the results are provided as follows. We can see that the latencies of CNN and AdderNet are very similar. To achieve higher speed on GPU, more engineering works like professional CUDA implementation or dedicated hardware unit support, are required to handle the intensive adder operation.\n\n| CNN (ms) | AdderNet (ms) |\n| :-----------: | :-----------: |\n| $4.06\\pm0.29$ | $4.18\\pm0.35$ |\n\n**Q2:** It is highly recommended to add a more detailed recap about AdderNet, which will make the whole manuscript smoother, especially for those who do not familiar with it.\n\n**A2:** Thanks for the good suggestion. In L74-L86 in the paper, we provided a concise introduction and comparison of the computational paradigms of CNN and AdderNet. We will add more detailed recap on AdderNet and update the manuscript as the reviewer suggest.\n\n**Q3:** In Fig. 3, how is the -126.2 calculated? There is no detailed explanation about it.\n\n**A3:** Thanks for the question. The bias term -126.2 is calculated according to Eq.16, i.e., lossless range clamp for weights to the range of activations. Specifically, in Fig.3, $r_x=1.6$, then the bias term is equal to :$-((16.8-1.6)+(12.3-1.6)+(15.4-1.6)+(11.3-1.6)+(21.0-1.6)+(19.2-1.6)+(10.0-1.6)+(18.4-1.6)+(16.2-1.6))=-126.2$\n\nWe will add detailed explanation where appropriate and update the manuscript accordingly.\n\n**Q4:** In line 45, “L1-norm quantization” is unclear. Does it mean an L1-norm-based quantization method or quantization for L1-norm operation?\n\n**A4:** Thanks for the nice concern. \"L1-norm quantization\" means the quantization for L1-norm operation (Eq.2). We will fix this ambiguity in the updated manuscript.\n\n**Q5:** The strange “rec” symbol in Line 193.\n\n**A5:** Thanks for the nice concern. The \"rec\" symbol in L193 indicates the end of the proof of Theorem 1, which is in line with mathematical norms.\n\n**Q6:** There are several minor grammar issues.\n\n**A6:** Thanks for pointing out the minor grammar issues in the paper. These issues will be corrected as the reviewer suggest in the updated manuscript.",
" Thanks for your strong support and detailed comments.\n\n**Q1:** The values in Fig. 4 are too small to read. The authors are required to refine them.\n\n**A1:** Thanks for the suggestion. This problem will be fixed in the updated manuscript.\n\n**Q2:** The histogram for INT4 weights adjacent to “over clamp” is significantly higher (Fig.1 in Appendix), however, this phenomenon is not expressed in the top of Fig.1 (c). The authors are advised to revise this detail for better presentation.\n\n**A2:** Thanks for pointing out this problem. The histograms closer to \"over clamp\" are indeed higher, because the out-of -range values are all truncated here. We will correct the Fig.1 (c) accordingly in the updated manuscript.\n\n**Q3:** Is it a common phenomenon that the distribution of weights and activations in pre-trained adder neural networks differs greatly?\n\n**A3:** Thanks for the good question. The full-precision AdderNet[1] on classification task first revealed this phenomena, that is, the absolute ranges of weights vary widely between output channels, and the absolute ranges of activations and weights also vary widely. Subsequent work[2] on detection task also revealed this phenomena. If some restrictions such as a higher L2 regularization or weight standardization[3] are added to weights druing training full-precision AdderNet, the weight values can be effectively narrowed. However, it tends to result in lower accuracy of the full-precision AdderNet. We perform the experiments of full-precision adder ResNet-20 on CIFAR-100 dataset with various L2 regularization or weight standardization. Correspondingly, the mean and variance of the absolute values in the fully trained *layer1.1.conv2* are taken as an example to show the properties of weight values. The results are provided as follows:\n\n| | 5e-4 (Our paper) | 1e-3 | 5e-3 | WS |\n| :-: | :--------------: | :---: | :---: | :---: |\n| Acc (%) | 67.59 | 55.61 | 41.23 | 61.52 |\n| Mean(\\|W\\|) | 6.05 | 2.13 | 0.26 | 0.65 |\n| Var(\\|W\\|) | 90.33 | 38.27 | 17.70 | 0.58 |\n\n**Q4:** Can the authors further explain the meaning of the values marked in red in Fig.3 (b) and Fig.3 (c)? How are these values calculated?\n\n**A4:** Thanks for the nice concern. The value -1.6 marked in red in Fig.3 (b) denotes the value of activations after outliers clamping. Specifically, $\\mathbb{X}= ${$1.4,1.2,1.1,4.5,0.0,0.6,0.4,0.8,1.6$}, then the sorted $\\mathbb{\\widetilde{X}}=${$0.0,0.4,0.6,0.8,1.1,1.2,1.4,1.6,4.5$}. We select the value$r_x=\\mathbb{\\widetilde{X}}[\\lfloor0.9*8\\rceil]=\\mathbb{\\widetilde{X}}[7]=1.6$ as the range of activations for the next calculation of scale. Correspondingly, any activations whose absolute value exceeds 1.6 will be clamped to 1.6 or -1.6, i.e., -4.5 clamped to -1.6.\n\nIn Fig.3 (c), the values in $W_{3}$ are clamped to the range of activations after the step of outliers clamp for activations, i.e., 1.6. The bias term -126.2 marked in red is calculated according to Eq.16. That is, $-((16.8-1.6)+(12.3-1.6)+(15.4-1.6)+(11.3-1.6)+(21.0-1.6)+(19.2-1.6)+(10.0-1.6)+(18.4-1.6)+(16.2-1.6))=-126.2$\n\n**Q5:** The results in Fig.5 (a) are impressive, yet the authors are advised to make more discussions on the results to make the paper stronger.\n\n**A5:** Thanks for the nice suggestion. 
QSSF simply adopt only one shared scale to quantize both the weights and activations simultaneously, which ignores the properties of the distribution of the weights and activations of AdderNet, leading to the problems of \"over clamp\" and \"bits waste\", further leading to poor accuracy. In contrast, we propose a quantization method consisting of three parts: clustering-based weights grouping, lossless range clamp for weights and outliers clamp for activations. The problems of \"over clamp\" and \"bits waste\" can be effectively resolved with our quantization method, further leading to high quantized accuracy. Compared with QSSF, the advantage of our method is significant, especially at low bits. For example, in the case of 4-bit, our method achieves 8.5% higher accuracy than QSSF.\n\n**Q6:** Writing suggestion: Line 166: “both weights and features” should be “both weights and activations”, to be consistent with the full text.\n\n**A6:** Thanks for the suggestion. In the community of quantization, \"activations\" is indeed used more often than \"features\", we will correct to \"both weights and activations\" as the reviewer suggest.\n\n[1] Chen H, Wang Y, Xu C, et al. AdderNet: Do we really need multiplications in deep learning?[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 1468-1477.\n\n[2] Chen X, Xu C, Dong M, et al. An empirical study of adder neural networks for object detection[J]. Advances in Neural Information Processing Systems, 2021, 34: 6894-6905.\n\n[3] Qiao S, Wang H, Liu C, et al. Micro-batch training with batch-channel normalization and weight standardization[J]. arXiv preprint arXiv:1903.10520, 2019.",
" This paper first thoroughly analyzes the difference in distributions of weights and activations in AdderNet and then proposes a new quantization algorithm by redistributing the weights and the activations. Strengths: \n1. This paper conducts a thorough study of the dilemma in AdderNet quantization, and proposes an effective method to solve this problem. \n2. The paper is clearly presented.\n3. I am glad to see that the accuracy drop is within 2% for ImageNet even at 4 bits.\n\nWeaknesses (suggestions): \n\n1. The accuracy and energy comparisons with quantized CNN seem to be not very adequate. Better to compare the accuracy drops in quantized CNNs as well as the currently presented accuracy drop in AdderNet. So the reader can get the full information whether AdderNet is more advanced as compared to CNNs in terms of quantization.\n\n2. AdderNet is a specific neural network, it is not clear whether the proposed methods can be generalized to other neural networks will similar distribution properties.\n\n3. Only classification results are shown, how about other downstrem tasks, e.g., detection, segemetation. Or other network architectures, e.g., ViTs with adder kernels.\n\n4. Does the AdderNet compatiable with SSL pretraining? e.g., MAE pretraining, and how the quantization scheme different for pretraining stage and fine-tuning or normal training stages?\n\n5. How is the latency or throughput in real devices? I am curious of this since there is no available efficient CUDA implementation of AdderNet open-sourced as of now. Also, quantization is also not efficient for general CPU/GPU devices. Have you deployed the trained models to real devices yet?\n\nI am open to further boost the score or champion this paper if the rebuttal is also sound and can somewhat solve my quesions/concerns. 1. The authors show the quantization results from 8bit to 4bit, and I am curious about the accuracy on lower-bit quantization, e.g., 3bit?\nFig. 6 shows energy and accuracy comparisons with CNN quantized via BRECQ PTQ method. Could you show comparisons with more CNN quantization methods?\n\n2. Is the quantization bit of output (before adding the bias) consistent with that of input and weight? Do you also quantize gradients and errors during training?\n\n3. The authors cluster weights in AdderNet into several groups, and then leverage different inter-group scale factors to quantize both weights and input. By doing this, I am concerned about whether it will consume more energy to transfer the aforementioned different quantized inputs when deployed onto mobile devices.\n\n4. Do you have any plan to open source the kernel implementation of AdderNet and corresponding quantization schemes?\n\n---\nAfter reading the author's post, most of my questions are resolved and I am tending to accept it. The new experiments or tables can be added to the revision. The comparison with CNN quantization method seems to be not very adequate.",
" The paper proposes a new quantization scheme for Addernets. Specifically, the authors propose to cluster model weights in the channel dimension, where each cluster is assigned its own scale value for quantization. This ensures that the scales can better represent the range of values, which may be different for different weight channels. The authors further propose to absorb the error caused by clamping the weights inside the layer bias, which helps restore accuracy. Finally, the proposed method removes outliers when quantizing the activations, therefore tailoring the scale to better represent the valid range of data. Strengths:\n- The paper is well-written and the ideas are clearly explained.\n- The proposed method significantly improves the accuracy after quantization, compared to prior methods.\n\nWeaknesses:\n- Some of the claims are not backed up by the method. Specifically, the authors mention that a shortcoming of prior work is using the same scale for weights and activations which is decided based on either of the two and therefore may not best fit the other. The proposed method also adopts the same scheme, where the scales are still determined by either the weights or the activations with the only difference being the increased granularity of the scale choices due to the channel clustering. Please find more details on this in the next section.\n- Some questions remain regarding applying the method to new models, e.g., how to determine the number of clusters for new benchmarks. Questions:\n- It appears that the models are not trained with a weight regularization term which leads to a wide range of weight values in Figure 2. A standing question is whether adding strict regularization to training can change the effectiveness of the proposed method? In that scenario, the weights are forced to be within certain bounds, which may reduce the benefits of clustering as the same scale may likely work for all weights. \n\nLimitations:\n- The I term in equation 13 of the paper merely depends on the weight values and not the activations. Therefore, while the derived scale is perhaps optimized for the weights, there is no guarantee that it will work for the range of activations. The accuracy gains of the proposed method may therefore be due to the more fine-grained weight scales, rather than the fact that the scales are actually tailored to both the activation and the weights, which is what is assumed after reading the introduction. The authors have not discussed the limitations or potential negative social impact of their work.",
" This manuscript focuses on the problem of the quantization of AdderNet. The author has investigated the difference between AdderNet and traditional networks. Based on the differences, the dedicated quantization is achieved by redistributing the weights and activation of AdderNet. In the quantization method, three techniques are proposed to overcome the bit waste and over clamp problems, including clustering-based grouping quantization, range clamp of weights, and outlier clamp of activations. Experimental results show the effectiveness of the proposed method for quantizing AdderNet with different bit widths. Pros:\n\n- The manuscript is easy to follow. The analysis of the difference between conventional quantization methods for CNN and that for AdderNet is interesting. The statistics of the activations and weights of a pre-trained AdderNet are good.\n\n- As a new kind of efficient neural network (AdderNet), how to effectively quantize it is a challenging problem. Quantization of such NN would put forward a faster and more energy-efficient model.\n\n- The proposed method, includes clustering-based grouping quantization, range clamp of weights, and outlier clamp of activations, is promising for addressing the bit waste or over-clamp issues within AdderNet, which is also verified by the extension experiments.\n\nCons:\n\n- Besides the FLOPs and energy, it would be great to report the inference time of the proposed method.\n\n- It is highly recommended to add a more detailed recap about AdderNet, which will make the whole manuscript smoother, especially for those who do not familiar with it.\n\n- In Fig. 3, how is the `-126.2` calculated? There is no detailed explanation about it.\n\n- In line 45, “L1-norm quantization” is unclear. Does it mean an L1-norm-based quantization method or quantization for L1-norm operation?\n\nMinor issues\n\n-There is a strange “rec” symbol in Line 193\n\n- There are several minor grammar issues. For example, in line 21: “well-verified” should be “well verified”.\n Please see the weakness part. Yes",
" Quantization is an effective method to further reduce the energy consumption of AdderNets. However, previous AdderNets quantization methods cannot properly handle the challenge of large differences in weights and activations and often lead to a large accuracy degradation, especially in the case of low-bit (4-bit). This paper first reveals the key reasons for the poor accuracy of previous AdderNets quantization methods, namely “over clamp” and “bits waste”. Then a novel quantization method for AdderNets is proposed. Experiments on several datasets and models demonstrate the effectiveness of the proposed method. Strengths\n1. The paper is extremely well structured and easy to follow, with motivation well-explained.\n2. To my knowledge, this paper is by far the most comprehensive and systematic study of the quantization of AdderNets. Through thorough analysis, this paper concludes two main reasons for the poor accuracy of previous AdderNets quantization methods, namely “over clamp” and “bits waste”, which are insightful. The proposed scheme of the clustering-based weights grouping and the lossless range clamp for weights are interesting and novel.\n3. Extensive experiments on different models and datasets. Superior performance compared to other AdderNets quantization methods. The thorough ablation studies verifies the effectiveness of each components. The distributions of weights and activations (Fig.1 in Appendix) demonstrate that the proposed method can effectively solve the problem of “over clamp” and “bits waste”, leading to a higher quantized performance.\nWeaknesses\n1. The values in Fig. 4 are too small to read. The authors are required to refine them.\n2. The histogram for INT4 weights adjacent to “over clamp” is significantly higher (Fig.1 in Appendix), however, this phenomenon is not expressed in the top of Fig.1 (c). The authors are advised to revise this detail for better presentation. 1. Is it a common phenomenon that the distribution of weights and activations in pre-trained adder neural networks differs greatly?\n2. Can the authors further explain the meaning of the values marked in red in Fig.3 (b) and Fig.3 (c)? How are these values calculated?\n3. The results in Fig.5 (a) are impressive, yet the authors are advised to make more discussions on the results to make the paper stronger.\n4. Writing suggestion: Line 166: “both weights and features” should be “both weights and activations”, to be consistent with the full text. The authors have discussed the limitations and potential negative societal impact of their work in Appendix."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"uCm49e8MC6O",
"mhvJdh4d-_e",
"mhvJdh4d-_e",
"16elRJ5NSKq",
"16elRJ5NSKq",
"56aLZhvEFfB",
"Fw55hZ1lwor",
"nips_2022_ZuSiW0EixjX",
"nips_2022_ZuSiW0EixjX",
"nips_2022_ZuSiW0EixjX",
"nips_2022_ZuSiW0EixjX"
] |
nips_2022_wjClgX-muzB | Rethinking Variational Inference for Probabilistic Programs with Stochastic Support | We introduce Support Decomposition Variational Inference (SDVI), a new variational inference (VI) approach for probabilistic programs with stochastic support. Existing approaches to this problem rely on designing a single global variational guide on a variable-by-variable basis, while maintaining the stochastic control flow of the original program. SDVI instead breaks the program down into sub-programs with static support, before automatically building separate sub-guides for each. This decomposition significantly aids in the construction of suitable variational families, enabling, in turn, substantial improvements in inference performance. | Accept | The reviewers have reached consensus after processing the authors' feedback. They all agree that this manuscript presents an interesting approach to applying variational inference in a setting of probabilistic programming that is of interest to the community. The reviewers raise tangible points that the authors have incorporated into their revision. I recommend that the authors continue to polish their manuscript to clearly address these points in the final version of their manuscript. | train | [
"8MezfFjm8bt",
"nQ0By83IMUD",
"86sQmhMJLzm",
"H53vUFd2Swq",
"FXeiCrS63gp",
"OM17mcSahrK",
"0FhZ0LvnW_Z",
"LGkISGLHysv",
"4F8cfP7WDTr",
"PgtWuAF-YT",
"JU9ggTsMaFt",
"Ed6I8MqQ5h",
"8QgoRWkQNaF",
"eS9LdC6xNCW",
"oGETq0ZD_P0",
"gaJxoZPu_0P",
"ypDiqxlDQWX",
"m3SttBGt3EB",
"odOLPGMUpK2",
"ZgkiugRDayY",
"Ip8iKEAWb66",
"AlhZVqR5mIv",
"YJok-jP1WDH"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree that there are already more complex, custom variational families that were proposed for specific models with stochastic support and that the section you highlight could be more clear about this. We will emphasize more clearly that we are focusing on automated guide construction methods and we are going to add a discussion of the fact that bespoke guides with more complex structures have been developed for specific models and that existing PPSs such as Pyro provide users with tools to express these bespoke guides easily.",
" Thank you very much for your positive response to our comments and increasing your score! We appreciate the time and effort that you have put into this review, we believe it has clearly helped to improve the paper.\n\n> Your explanation here is very clear. I have checked the modified text in the paper and I believe it could still be more clearly signposted [...]\n\nThank you very much for your feedback. It is very valuable and will significantly improve the exposition of our method. We will incorporate your proposed changes and add extra discussion in L240 to highlight that the surrogate density is only used temporarily. Further, we will remove the “Finally” from L262 and emphasize that the surrogate ELBO is only suitable for optimizing the variational parameters.\n",
" I think the revised introduction presents the state of affairs more accurately. One part that I still disagree with slightly is the claim that \"existing approaches all use a single global guide that mirrors the control flow of the input program, then introduce a variational approximation for each unique variable.\" Read in context, \"existing approaches\" here refers to all guide construction methods (including manual guide construction), and I believe there are several prominent examples of more complex variational families that do not use mean-field approximations, and do not follow the same control flow as the model.\n\nFor instance, consider the Attend Infer Repeat model (https://arxiv.org/pdf/1603.08575.pdf, Figure 2): in the model program, a geometrically distributed number is sampled, and that many objects are generated. However, in the guide program, a `while` loop is used, in which a recurrent network repeatedly proposes a new object until deciding to stop.\n\nOf course, AIR is quite different from your proposed method. However, I believe that in the probabilistic programming community, especially among those who work on languages with support for variational inference with custom guides, it is very much already a goal to support models like AIR, with variational families that are *not* mean-field and which handle control flow in a smart way.",
" Thank you to the authors for the very thorough response. As I mentioned, my initial 'borderline' score was due primarily to my uncertainty that I had understood the algorithm, and particularly what role the rejection sampling played. Your explanation here is very clear. I have checked the modified text in the paper and I believe it could still be more clearly signposted:\n\n* On L240, maybe you could clearly signpost that you are introducing the surrogate target temporarily, solely for optimizing the local SLP.\n\n* More importantly, L262 is the bit I find confusing --- the \"finally\" makes it sound as though you are introducing a final modification to the SLP training procedure. I think it would help to explicitly explain at the start of this paragraph that the surrogate ELBO is only used during training, and that when estimating the local ELBO for pruning SLPs and setting the final mixture weights, a different solution is required.\n\n> We included the Pyro AutoGuide in the experiment in Section 6.1 primarily because it was actually outperforming BBVI despite the bias (presumably because this was outweighed by gains in variance) and so it felt inappropriate to omit it. We are happy to remove it if you still think it is a problem though.\n\nThank you for the explanation. I do think it makes the most sense to compare to other unbiased gradient estimators---whether the bias of reparam will be acceptably low seems very dependent on the model, and outside the scope of the paper. But I understand your reasoning and am happy to leave this to your judgment. \n\n> Namely, utilising the fact that an extra content page is allowed for the camera ready (but not the revision now), we plan to add the following discussion: ...\n\nThank you -- I think this discussion is nicely written. \n\n> However, we would like to point out that it isn't standard practice to construct the custom variational families/guides based on decomposing the input program into SLPs: the process laid out at the start of Section 3 is still very much the norm for manually constructed guides. Moreover, even constructing an SLP-based variational family manually is actually very difficult without some of the techniques we have introduced here, such as our use of surrogate ELBOs and our resource allocation scheme. Therefore, our work is also relevant beyond the setting of automated inference because it provides evidence of the benefit of this decomposition and introduces many of the tools needed to utilize it.\n\nThanks, I agree, and I think the paper will benefit from acknowledging this explicitly.\n\nThis very thorough author response has addressed my concerns and I will raise my score to a 7.",
" Thank you very much for your response and we are glad that you found our clarifications helpful!",
" Thank you for considering our comments. We are glad to see that you have increased our score as a result.",
" Thank you for the clarifications. I have carefully read your response to all the reviews and I believe the authors have answered most of the concerns. I am leaning towards the acceptance of this paper and ill change my score to accept. ",
" Thank you for clarifying. I will take your responses here into account in revising my review.",
" Thank you for taking the time to respond to our comments! We are delighted to hear that you are increasing your score.",
" Thank you a lot for the clarification! Though I am still a bit concern about the practical aspect of the approach and the challenges around it, I'm happy to increase the score to reflect my feeling about the impact of the work.",
" Dear Reviewers and AC\n\nThank you again for all your hard work.\n\nWe just wanted to quickly follow up again about our rebuttal to the reviews as there is not long left for review-author discussions. Please let us know if our responses have alleviated your concerns or if there is anything you would like further clarification or additional changes on.\n\nMany thanks!\n\nPaper 3173 Authors",
" > W2 cont. [...] For example, L176 says of the SLP discovery process that \"experiments show it can reliably identify all SLPs with non-negligible posterior mass.\" [...]\n\nWe agree that this sentence should be rephrased and was a poor choice of words. We meant to convey that in the specific experiments we consider this method is able to identify SLPs that overall lead to non-negligible ELBO values, but you are right that this will not always be the case (such as in the example you suggest); we have made this clearer. We further note here that the mechanism to find SLPs is a modular part of our algorithm and we will add some further discussion on when it might be necessary to switch to other methods like MCMC sampling [35] or static analysis of the program code [33,44,45].\n\n\n> W3. I find Figure 1 and its explanation on L99-110 misleading\n\nWe apologize that these were not clear and agree they could be misleading; we were trying to make two separate points that have become inappropriately conflated when trying to summarize them in a single figure. The main goal of Fig. 1 and L99-110 is to highlight the shortfalls of BBVI (the blue line in Fig. 1). Here, BBVI is using the same guide form as the Pyro AutoGuide but with the bias in gradient estimates corrected. Note that BBVI also converges to a sub-optimal solution which is due to the form of the variational family. \n\nWe included the Pyro AutoGuide method so that the readers can see the current default behaviour for programs with stochastic support. Noting that the high variance issues one often experiences using REINFORCE gradients can actually cause larger final errors than the bias of invalidly using reparameterization, the default behaviour of Pyro AutoGuide is not entirely gratuitous without a method like SDVI. As you say yourself, one of the advantages of our approach is in reducing variance and thus avoiding this problem trade-off, so we would argue the failures of Pyro AutoGuide are not completely tangential. However, we appreciate that this link is very subtle and not the conclusion the reader will most naturally make. We have thus decided to just remove the Pyro AutoGuide from Fig 1 and have updated the writing accordingly.\n\n> W3 cont. This also applies to experiments that report results using biased gradient estimators—to me that feels like an orthogonal issue.\n\nWe included the Pyro AutoGuide in the experiment in Section 6.1 primarily because it was actually outperforming BBVI despite the bias (presumably because this was outweighed by gains in variance) and so it felt inappropriate to omit it. We are happy to remove it if you still think it is a problem though.\n\n> W4. I don't think the introduction (e.g. L24-42) gives an accurate summary of the state of the art. It is true that many PPLs [...] aim to support variational inference, but the aim has not in general been to automate the construction of guides [...] the scope of the problem within the broader field [...] should be clearly described: many PPLs already support accurate variational inference (with correct gradient estimates) in models with stochastic control flow if the user provides a sensible variational family. [...]\n\nWe agree that there have been many significant advances in the design of PPL which allows users to express custom guides. We have tried to emphasise that our paper focuses on the case when the guide should be constructed automatically. 
We have made some initial edits to the introduction to make this clearer, and will use the extra space for the camera ready submission to provide a more extensive discussion of these systems.\n\nHowever, we would like to point out that it isn't standard practice to construct the custom variational families/guides based on decomposing the input program into SLPs: the process laid out at the start of Section 3 is still very much the norm for manually constructed guides. Moreover, even constructing an SLP-based variational family manually is actually very difficult without some of the techniques we have introduced here, such as our use of surrogate ELBOs and our resource allocation scheme. Therefore, our work is also relevant beyond the setting of automated inference because it provides evidence of the benefit of this decomposition and introduces many of the tools needed to utilize it.\n\n> Small nitpick: on line 63, I don't think \"higher-order\" is relevant -- a first-order language with branching and recursion yields the same class of models.\n\nWe have updated the writing to reflect that. Thank you for pointing this out.\n",
" > Q2. [...] I am confused about how the rejection sampling in Section 4.5 is supposed to work. [...] How do you compute Z and its gradient?\n\nAs mentioned above, the rejection sampling step is only used to evaluate the ELBO and sampling from the learned variational approximation, we don’t use it for training and therefore do not ever need to estimate its gradients. \n\nTo be precise on how we estimate it, we first draw $N$ samples $\\{x^{(i)}\\}_{i=1}^N$ from $\\tilde{q}_k$. We then reject samples which do not fall into the SLP and estimate $\\tilde{Z}_k$ as the acceptance rate of this sampler (i.e. $N_A/N$ where $N_A$ is the number of samples accepted). Finally, using $A$ to denote the set of indices of accepted samples, we form our ELBO estimate as\n$$\n\\hat{\\mathcal{L}}_k = \\frac{1}{N_A} \\sum\\_{i \\in A} \\log \\frac{N_A \\gamma_k (x^{(i)})}{N \\tilde{q}_k(x^{(i)};\\phi_k)}.\n$$\nNote here that $A$ and $N_A$ are random variables that both implicitly depend on $\\phi_k$, which is why we can use this for estimation, but not training.\n\nWe have added the explicit form of this estimator to the new section Appendix H.\n\n> W1. Rejection sampling [...] introduces a Z term [...] which depends on the guide's parameters and so must be considered during optimization, appears to be intractable, and the authors do not explain how it is computed when estimating the ELBO and the ELBO's gradient.\n\nWe believe this concern stems mostly from a misunderstanding: as previously alluded to, rejection sampling is not used in training the individual $\\tilde{q}_k$ and so $\\tilde{Z}_k$ (and its gradients) do not need to be calculated in the optimization of $\\phi_k$. However, we appreciate that we could have been clearer in explaining how the local guides are constructed and trained, and have made adjustments to the paper to clarify this, including making the dependency of $\\tilde{Z}_k$ on $\\phi_k$ explicit.\n\n> W2. I did not see any clear discussion of the method's limitations. At times, I thought the authors made misleading claims that downplayed limitations.\n\nWe are very sorry that you felt this, it was never our intention to be misleading about the limitations and we are very happy to make updates to ensure this is not the case. We address some of your more specific points on this below, while we also plan to add a dedicated limitations section to the paper. Namely, utilising the fact that an extra content page is allowed for the camera ready (but not the revision now), we plan to add the following discussion:\n\n“While we believe that SDVI provides a number of significant contributions towards the goal of effective (automated) inference for probabilistic programs with stochastic support, it still naturally has some limitations. Perhaps the most obvious is that it if there is a very large number of SLPs that cannot be easily discounted from having significant posterior mass, it can be challenging to learn effective variational approximations for all of them, such that SDVI is likely to perform poorly if the number becomes too large. Here, customized conventional VI or reversible jump MCMC approaches might be preferable, as they can be set up to focus on the transitions between SLPs, rather than trying to carefully characterize individual SLPs.\n\nAnother limitation is that our current focus on automation means that there are still open questions about how one would construct more customized guides within the SDVI framework. 
Here the breakdown into individual SLPs and use of resource allocation strategies will still often be useful, but changes to our implementation would be required to allow more user control and customization. For example, the discovery of individual SLPs using the prior is a potential current failure mode and it would be useful to support the use of more sophisticated program analysis techniques (like [45]).\n\nA more subtle limitation is that the local inferences of each SLP can sometimes still be quite challenging themselves. If the true posterior places a lot of mass near the boundaries of the SLP, there can still be a significant posterior discontinuity, meaning we might need advanced local variational families (e.g. normalizing flows) and/or gradient estimators. Such problems also occur in static support settings and are usually much more manageable than the original stochastic support problem, but futher work is needed to fully automate dealing with them.\n\nFinally, variational methods are often used not only for inference, but as a basis for model learning as well. In principle, SDVI could also be used in such settings, but as described in Appendix F, there are still a number of challenges that need to be overcome to do this in practice.”",
" Thank you very much for your extremely thorough and helpful review; you raise a number of excellent points that will help us improve the paper. We hope that our response below and paper update alleviate the concerns that prevented you from making a more confident positive recommendation.\n\n> Q1. [...] you describe two methods for resolving the issue that local guides may not 'stay within' the boundaries of their SLPs. The way I read it, you are applying both solutions (mixing the target with a constant density, and rejection sampling the guide)---but isn't only one necessary?\n\nThis is a very good question that we agree was not properly addressed in the original submission. The short answer is that we need both because we cannot easily use rejection sampling when training the individual variational approximations, but need the truncation when producing samples at test time and evaluating the ELBOs for resource allocation.\n\nMore precisely, truncating the guide via rejection sampling is needed to ensure we produce valid samples for the target SLP, once the individual variational approximations have been learned. Without this, our guides might generate invalid samples at test time that cannot even be evaluated. It is also needed for evaluating the ELBO for $q_k$ (as per the right hand side of Eq. (7)) for resource allocation and constructing $q(k;\\lambda)$, noting that the true ELBO of $\\tilde{q}_k$ is $-\\infty$ if it places any mass outside the SLP.\n\nHowever, such truncation/rejection sampling is generally problematic (see below) when training the individual $\\phi_k$. Here mixing the target with a constant density provides a method to avoid these issues by instead training the $\\phi_k$ to Eq. (11) (note this previously had a typo which may have caused confusion: it should have been an expectation with respect to $\\tilde{q}_k$, not $q_k$). \n\nThe problems with rejection sampling during training relate to the fact that $\\tilde{Z}_k$ depends on $\\phi_k$, exactly as you allude to in Weakness 1 (W1), and the need to learn a rejection sampler with a high acceptance rate. To see these issues, note that $\\mathcal{L}_k(\\phi_k)$ from Eq. (7) can be directly expressed in terms of $\\tilde{q}_k$:\n$$\\mathcal{L}_k(\\phi_k) = \\log \\tilde{Z}_k(\\phi_k) + \\frac{1}{\\tilde{Z}_k(\\phi_k)} \\mathbb{E}\\_{\\tilde{q}_k(x;\\phi_k)}\\left[ \\mathbb{I}(x\\in\\mathcal{X}_k) \\log \\frac{\\gamma_k(x)}{\\tilde{q}_k(x;\\phi_k)} \\right].$$\nNow the dependency of $\\tilde{Z}_k(\\phi_k)$ on $\\phi_k$ makes this an impractical objective for optimizing $\\phi_k$. In particular, though $\\tilde{Z}_k(\\phi_k)$ can easily be estimated using Monte Carlo, we actually cannot generate conventional unbiased estimates of $\\log \\tilde{Z}_k(\\phi_k)$ and $1/\\tilde{Z}_k(\\phi_k)$ (or their gradients) because mapping the Monte Carlo estimator induces a bias. Furthermore, this objective applies no pressure to learn a $\\tilde{q}_k$ with a high acceptance rate, i.e. which actually concentrates on SLP $k$, such that it can easily learn a variational approximation that is very difficult to draw truncated samples from at test time. \n\nBy contrast, using our surrogate objective in Eq. (11) allows us to produce unbiased gradient estimates. Because of the mode seeking behaviour of variational inference, it also naturally forces us to learn a variational approximation with a high acceptance rate, provided we use a suitably low value of $c$ in Eq. (10). 
If desired, one can even take $c \\rightarrow 0$ during training to learn an approximation which only produces samples from the target SLP without requiring any rejection. We have added a plot in Appendix G that shows that we do indeed learn very high acceptance rates for the problem in Section 6.1.\n\nNote that the surrogate and true ELBOs are exactly equal for any variational approximation that is confined to the SLP (as these have $\\tilde{Z}_k(\\phi_k)=1$). This does not always necessarily mean that they have the same optima in $\\phi_k$ for restricted variational families, even in the limit $c\\rightarrow 0$. However, such differences originate from the fact that the trunctation can itself actually generalize the variational family (e.g. if $\\tilde{q}_k$ is Gaussian, then $q_k$ will be a truncated Gaussians). As such, any hypothetical gains from targetting Eq. (7) directly will always be offset against drops in the acceptance rate of the rejection sampler.\n\nWe appreciate that this was not properly explained in the original version of the submission and have added a new section in Appendix G to correct this. We have also made edits to the main paper to make it clearer that we do not use rejection sampling during training and the high level reasons for this.\n",
" Thank you very much for your thoughtful review and praise of our work. We are glad to see that you are already in favour of acceptance and hope that our response addresses your remaining concerns.\n\n> 1. The work seems to be incremental over [33-35] especially [35]. The authors need to justify how their work differs from the aforementioned references.\n\nWe are very happy to add additional discussion on how our work differs from these papers (note they are now references [32-34] in the updated draft but we use the old numbers below), we will use the extra space available for the camera ready to expand our related work section to do this, based on the additional discussion below.\nPerhaps the biggest distinguishing technical factor to [35] is that our approach is a variational inference approach rather than a Monte Carlo approach, with a lot of technical challenges arising from this distinction. As Reviewer M2CX elegantly puts it: \n“[U]sing variational inference instead of MCMC poses different challenges, which are reasonably addressed in the section 4” (please also see their subsequent bullet points). The practical significance of this change is borne out in our empirical results, where we see that the improved speed and scaling of variational inference leads to very substantial performance gains. We believe our work also has novelty in showing how using a variational family based on SLPs naturally leads to divide-and-conquer style algorithms emerging, due to the resulting separability of the ELBO.\n[33] and [34] both also use the general idea of breaking down programs into SLPs, but both papers consider starkly different problem settings and have very different aims to our own work. [33] consider a restricted programming language which only allows linear arithmetic and only allows “hard” conditioning (i.e. condition on whether a certain condition evaluates to True/False). They therefore consider completely separate types of inference problems. Similarly, the language considered in [34] does not contain any “observe” statements at all and therefore does not consider the task of posterior inference. Neither have any direct link to variational inference.\n\n\n> How does it compare to MCMC VI?\n\nExisting work on MCMC VI has been focused on the setting of static support and we are not aware of any current approaches that can be applied in the stochastic support setting to provide a testable baseline we can compare to. \n\nThe fact that SDVI breaks down the inference problems into sub-problems with static support means that it could provide a useful stepping stone to generalize MCMC VI techniques to the stochastic support setting, but investigating this interesting avenue properly is unfortunately beyond the scope of the current paper.\n\nThere are also some interesting mathematical links between MCMC VI and our use of rejection sampling in the ELBO, we thank you for bringing these up and will add discussion on them to the paper for the camera ready. For example, in cases where the learned $\\tilde{q}_k$ has a very low acceptance rate under rejection sampling, one might be able to use MCMC techniques to more efficiently generate samples from the learned $q_k$ instead. We note though that this is tangential to the common practice of using MCMC to enrich variational families in order to enhance training of the model parameters, which does not assist the learning of the variational approximation itself.\n\n> 2. 
The paper needs to include more related works and explain the differences and similarities between the proposed method and related works in more detail. There has been a lot going on in this field and it begs a more comprehensive literature review.\n\nWe hope that our answers to your questions above were able to alleviate your concerns around the related work. We plan to use the extra page allowed for the camera-ready submission to provide a more extensive literature review, and we are happy to add further discussions of any particular work that you believe needs to be mentioned as well. \n",
" > In the discussion of Table 1, “MAP estimate of K=6, the local guide q_k corresponding to the SLP with 5 components” - should this be SLP with 6 components?\n\nThank you for highlighting this source of confusion. The “5 components” is actually correct, but we agree this sentence could have been clearer. What we are trying to say is that on the runs where SDVI returns an (incorrect) MAP estimate of $K=6$, this has occured because the local inference for the SLP with $5$ components has failed to train effectively. We have clarified this in the paper.",
" Thank you very much for your careful review and helpful suggestions. We are glad to see that you are already in favour of acceptance and hope that our response addresses your remaining concerns.\n\n> The paper lacks discussions on how expensive the approach is (though it seems that the method is scalable through the IGMM experiment).\n\nGood point, we will add more direct discussion of this in the final version of the paper (utilising the fact we will then have an extra page available). Here we would first like to point out that all our comparisons are cost-normalized—noting that the per-likelihood-evaluation cost of each approach is essentially identical—so the gains shown are in real-time terms. In fact, because SDVI allows parallelization of computation over the different SLPs in a manner that cannot be exploited by conventional VI approaches, our comparisons are thus actually very conservative, as they do not account for the speed ups this parallelization can provide.\n\nThere is naturally a storage overhead for SDVI in that we learn multiple variational approximations instead of just one, but given the memory costs from the variational approximations are generally very low anyway, it will be very rare that this is a problem in practice. SDVI also requires some extra computations, compared with standard VI, in evaluating the ELBOs themselves, required for resource allocation and calculating $q(k;\\lambda)$. However, this can be very directly controlled and easily kept low (we use about 10% of computational budget for this in our experiments), while this cost is already accounted for by the cost normalization in our experiments.\n\n\n> Though the experimental results seem to be correct, it would be more interesting for readers to know in which situations we need to construct a probabilistic program with stochastic support. This might deserve a paragraph in the introduction section.\n\nGreat suggestion, we will happily add this in for the final version of the paper. Many use cases arise from Bayesian Nonparametrics [Orbanz and Teh, \"Bayesian Nonparametric Models,\" Encyclopedia of machine learning 1 (2010); Bloem-Reddy et al. “Sampling and inference for discrete random probability measures in probabilistic programs.” AABI (2017)]. Another common class of problems occur when performing model selection or Bayesian model averaging (or combination), as is often done in automatic data modelling (see e.g. [Saad et al. \"Bayesian synthesis of probabilistic programs for automatic data modeling\" POPL (2019)]). Furthermore, many scientific simulators, as used extensively in the physical sciences for example, define models with stochastic support because they often contain stochastic control flow. Since these simulator models are often very sophisticated and encode domain knowledge which has been accumulated over decades, scientists often want to use these simulator to make parameter inferences [15-17].\n\n> The “rethinking” word in the title seems to create confusion because the main algorithm is only compared to the simple Gaussian approximation.\n\nThe “rethinking” in the title is meant to refer to the fact that we are constructing the guide in quite a fundamentally different way than is usually done for VI in PPLs (cf Section 3). This change is independent of the local form of approximations used for individual sample draws. \nNote that the guides we compare do not exclusively rely on Gaussian approximations (see Appendix D for details). 
We have made edits to try and make this clearer, but would be open to amending the title if you think it is still a significant source of confusion.\n\n> The mention of Pyro’s AutoGuide in Figure 1 may cause confusion. It might be better to use an explicit algorithm name, e.g. Gaussian variational approximation or the like.\n\nWe agree this was potentially confusing. Another reviewer had an issue with this for a slightly different reason and so we have simply opted to remove this line from Figure 1.\n\n> Based on the program in Figure 1, apparently it requires users to use different variable names (z1, z2) for different paths. Is this a requirement for users?\n\nThis is not a requirement on an algorithmic level, but our current implementation does indeed require that users give different addresses to each lexical sample site in the program if they want them to be treated as separate paths. This is inherited from a deliberate design decision in Pyro to let users assign addresses to each sample site manually, but many other PPLs (e.g. Anglican) will instead automatically assign unique variable identifiers for different paths. We will add discussion on this to the final paper.",
" Thank you very much for your thoughtful review and positive feedback. We respond to your questions individually below.\n\n> Experiments are strong but limited to two low-dimensionality problems.\n\nWe would like to point out that our IGMM experiment deals with far higher dimensionalities than have ever been previously considered for programs with stochastic support. Specifically, the dimensionality of the parameter space $\\{u_1,\\dots,u_K\\}$ for the SLP with $K$ components is $100 * K$, such that some of them have over a $1000$ dimensions. By comparison, the analogous IGMM experiment in the DCC paper [35] only has dimensionality $1 * K$.\n\n> When comparing against sampling-based inference methods, why not include one based on importance sampling? Resample-move SMC inside DCC, for instance, could provide an estimate of the normalizing constant, and therefore (in log space) an ELBO to which to compare.\n\nThe DCC baseline we are comparing to is actually already leveraging a marginal likelihood estimator based on importance sampling (IS), namely PI-MAIS [68] (referenced in the Appendix) which adaptively constructs an importance sampling proposal distribution based on the outputs of the MCMC chains. However, as IS requires overdispered proposals, the ELBO scores for this approach are trivially $-\\infty$, preventing sensible comparison by this metric. We will use the extra space allowed for the camera-ready version to make this clearer in the main paper. \n\nNote also that we did not provide direct comparisons to more conventional IS baselines because the DCC paper had already shown substantial improvements over them. We could still easily add them if you think they are important though. \n\n> I don't understand what's meant by \"marginal likelihood\" for an SLP within a larger program. Not every SLP is going to make observations before reaching a site of stochastic control-flow, and so your importance weight could just be a prior/proposal density ratio. Is that ratio's expectation used as a normalizing constant to allocate compute power? \n\nGood question! While not every SLP will have observations, the boundaries of the SLP always place constraints on the density function. Specifically, $\\gamma_k$ always contains the term $\\mathbb{I}[x \\in \\mathcal{X}_k]$ which in itself is a form of conditioning as well. Therefore, inferring the boundaries of a given SLP can therefore be interpreted as its own inference problem as well, with the $\\gamma_k$ not being normalized densities even if we make no observations in that SLP. The normalizing constant in such scenarios is simply the expectation of $\\mathbb{I}[x \\in \\mathcal{X}_k]$ under the forward generative model, that is the probability of a prior sample being on that path. Therefore, it is still necessary to train a $q_k$ for these SLPs and allocate resources according to their normalization constant appropriately. We will add a comment about this to the final version of the paper.\n",
" We would like to thank the reviewers for their thoughtful reviews which have all been clearly written with significant effort and care. We are delighted that all reviewers have backed acceptance and gave almost universally strong subscores on Soundness, Presentation, and Contribution. Further, we gladly notice that the reviewers appreciated our “convincing” (Reviewer NHpY) and “comprehensive” (Reviewer A7EP) empirical results that “clearly set a new state of the art” (Reviewer Efev); believe that “the paper addresses a very important” (Reviewer A7EP), “difficult” (Reviewer M2CX), and “significant” (Reviewer Efev) problem; provides a “neat” (Reviewer NHpY) method which “seems to be a nice and useful addition to PPLs” (Reviewer A7EP) and which is “widely applicable and significant to a broad variety of downstream problems” (Reviewer Efev); and found the paper to be “very well written and easy to understand” (Reviewer A7EP).\n\nWe respond to the questions of each reviewer individually below. We would again like to thank the reviewers for their helpful suggestions, which we will happily incorporate and which we believe will further strengthen the paper. We have already uploaded an updated version of the paper, but please note that we will only be able to provide some of the planned updates in the final version of the paper, when we can utilise the fact that an additional page will be allowed (unlike now).\n",
" This paper presents a new way to automatically construct and train variational families for models expressed as general probabilistic programs. \n\nGiven a program $p$, it first generates many (prior) samples from $p$ to identify a collection of control flow paths that collectively have high mass under the prior. Then, a separate variational family is trained to target the restriction of the model to each control flow path. During training, control-flow paths with low ELBOs are periodically pruned, so that computation is focused on paths that appear to have relatively high posterior probability. Finally, the learned variational families are combined into a mixture, with mixture weights based on estimated ELBOs for each path. \n\nExperiments show competitive performance against other fully automated inference strategies for probabilistic programs, on three inference problems. Strengths:\n\n1. Convincing empirical results, showing that for a similar computational budget, the new method outperforms a few existing fully automatic inference methods.\n2. The paper grows the body of evidence that extracting SLPs and tackling inference separately in each (as done e.g. in DCC) is a useful technique for fully automated inference in universal PPLs.\n3. In models with discrete latent variables that influence control flow, gradient estimators are often based on REINFORCE, and a variant of the \"explore/exploit\" problem from reinforcement learning arises: without any gradient signal helping it to explore other branches of the control flow graph, the variational family may settle into 'exploiting' one suboptimal branch. This work presents a nice way of introducing some kind of 'exploration', by allocating some training time up front to many different branches, and pruning only if they do not look promising. This is a neat way of getting around the high variance of the REINFORCE estimator. (It's a bit similar to the strategy of exactly marginalizing the discrete variable, except maybe a bit less expensive in the long run, because if one path is not promising, it is pruned early.) \n4. An implementation in Pyro is provided, which increases the likelihood that this work will have real impact for practitioners.\n\nWeaknesses:\n\n1. In Section 4.5, rejection sampling the guide changes the variational family's density, as the authors write, introducing a \"Z\" term that measures the probability that the guide generates a sample that lies within its assigned SLP. But this Z term, which depends on the guide's parameters and so must be considered during optimization, appears to be intractable, and the authors do not explain how it is computed when estimating the ELBO and the ELBO's gradient. If it is ignored, I am worried that the gradient estimates would be biased. Furthermore, it doesn't seem this feature is 'stress-tested' by the experiments, the latter two of which use guides that ensure no rejection is necessary.\n\n2. I did not see any clear discussion of the method's limitations. At times, I thought the authors made misleading claims that downplayed limitations. For example, L176 says of the SLP discovery process that \"experiments show it can reliably identify all SLPs with non-negligible posterior mass.\" But my understanding is that you only discover SLPs with non-negligible _prior_ probability mass; in all the experiments, the posterior was concentrated in SLPs likely under the prior. Consider fitting the Gaussian mixture model to a dataset with many more components than the prior expects. 
The posterior will concentrate far from the prior, in an SLP that may not even have been discovered by the algorithm. Of course, this is a difficult case for many inference algorithms, but the limitation should be clearly explained. \n\n3. I find Figure 1 and its explanation on L99-110 misleading. The explanation makes it sound as though the problem in the example is the expressiveness of the default variational family (it cannot represent the posterior on x, due to stochastic control flow). But the striking behavior in the figure, where Autoguide fails, arises just because you are using the reparameterization trick in a non-reparameterizable model. This seems to be a totally separate question from the expressiveness of variational families: it's about whether the Pyro software happens to detect when to switch to the score function gradient estimator automatically. Unless your method *does* automatically detect when the reparameterization trick can or cannot be soundly applied, without any user intervention, this feels irrelevant to the paper. It reads as though a (predictable) failure of the gradient estimator is being passed off as a limitation of the variational family. This also applies to experiments that report results using biased gradient estimators—to me that feels like an orthogonal issue.\n\n4. I don't think the introduction (e.g. L24-42) gives an accurate summary of the state of the art. It is true that many PPLs, including those listed here, aim to support variational inference, but the aim has *not* in general been to automate the construction of guides (although of course, that is an interesting goal). Although Pyro has an Autoguide feature, many of the other PPLs in the list do not: the *intended* use case is that a user constructs a model *and* a guide as probabilistic programs, and the system automates gradient estimation and optimization of the ELBO. (Thus, L92-98 are also a bit misleading.) Even in Pyro, which does support automatic guide construction, many of the tutorials show how to construct custom guides, and the flexibility to do so is touted as a key feature. Arguably, this is what makes languages like Pyro useful for a much broader variety of problems than earlier languages that attempt to fully automate inference. I like that this paper tackles the hard problem of fully automated inference, and I think it makes progress in that direction, but the scope of the problem within the broader field of probabilistic programming should be clearly described: many PPLs already support accurate variational inference (with correct gradient estimates) in models with stochastic control flow if the user provides a sensible variational family. This paper is attempting to further automate the inference process by automatically constructing guides for a broader class of models than existing autoguide techniques can handle.\n\n\nSmall nitpick: on line 63, I don't think \"higher-order\" is relevant -- a first-order language with branching and recursion yields the same class of models. 1. In Section 4.5, you describe two methods for resolving the issue that local guides may not 'stay within' the boundaries of their SLPs. The way I read it, you are applying both solutions (mixing the target with a constant density, and rejection sampling the guide)---but isn't only one necessary? If you are rejection sampling, why introduce unnecessary discontinuities into the target?\n\n2. I am confused about how the rejection sampling in Section 4.5 is supposed to work. 
As you write, the density of the variational family now contains a \"Z\" term, equal to the probability that q produces a sample that is accepted. This Z term must be computed when estimating the ELBO and its gradient. (Though the notation does not make this clear, Z depends on the parameters phi, so Z cannot simply be dropped when training.) How do you compute Z and its gradient, which in general will be an intractable integral over the SLP's region (which could be strangely shaped)?\n\n(Note: my score is only \"borderline accept\" because I am not sure I've understood the method / if I have I am not sure it's sound, and also because I have concerns about certain misleading aspects of the current presentation, described in \"Strengths and Weaknesses.\" If these are addressed during the author response, I hope to revise my score to a more confident recommendation.)\n\n**EDIT AFTER DISCUSSION:** The author response has addressed my concerns and I am increasing my score to a 7. As stated in \"Weaknesses,\" I think the limitations are not adequately discussed. The class of probabilistic programs in universal PPLs is huge: when should we expect this algorithm to work, and when should we expect it to fail? i.e., when might users still resort to custom guides in Pyro? What assumptions does the new method make (e.g., non-negligible posterior mass on an SLP implies non-negligible prior mass)?",
" The paper proposes a new variant of variational inference for probabilistic programs with stochastic support. To address the challenges stochastic support poses, this paper proposes a novel way of constructing a variational guide by breaking down the problem into sub-programs. The paper demonstrates that this approach results in improvements in inference performance. \nStrengths: \n1) The paper addresses a very important and useful problem and the proposed method seems to be a nice and useful addition to probabilistic programming languages. I especially appreciate that the code is made available.\n2) The paper is very well written and easy to understand\n3) The paper contains comprehensive experiments to demonstrate the effectiveness of the method.\n\nWeakness: \n1) The work seems to be incremental over [33-35] especially [35]. The authors need to justify how their work differs from the aforementioned references.\n\n2) The paper needs to include more related works and explain the differences and similarities between the proposed method and related works in more detail. There has been a lot going on in this field and it begs a more comprehensive literature review. - Could authors justify the main differences between their work and that of for instance [35]?\n- How does it compare to MCMC VI? - This work seems a little incremental thus some justification is required to distinguish between this work and others. \n- Choosing a variational family is always a crucial task when doing VI. I am not sure how it is compared to the MCMC VI models. ",
" The paper proposes a variational inference method named Support Decomposition Variational Inference (SDVI) that targets probabilistic programs with stochastic support. The main idea is to perform variational inferences separately on sub-programs with static support. The variational approximation has a mixture form where each variational component is optimized independently for each sub-program. Finally, the approach is compared to baseline algorithms on a Gaussian model with stochastic control flow, on an infinite Gaussian mixture model (IGMM) with a stochastic number of clusters, and on inferring the kernel structure of a Gaussian Process. The paper is clearly written and the structure is easy to follow. I don’t have problems getting the main ideas.\n\nThe paper tackles the universality perspective in probabilistic programming systems. This seems to be a difficult problem. Though the main idea follows the “Divide, Conquer, and Combine” approach proposed by Zhou et al. [35], using variational inference instead of MCMC poses different challenges, which are reasonably addressed in the section 4:\n+ Using the Successive Halving algorithm to avoid wasting computational resources on less interesting sub-programs.\n+ Truncating the target density by a small positive value to ensure obtaining a finite ELBO (when a variational component proposes samples outside of the support of its corresponding sub-program).\n+ Variational components need to be initialized smartly so that they have sufficiently large probability mass on the support of the corresponding sub-programs.\n\nDespite that the authors already provide an implementation in Pyro, I feel that it is a bit tricky for practitioners to adopt the approach due to the complication of the above challenges.\n\nThe paper lacks discussions on how expensive the approach is (though it seems that the method is scalable through the IGMM experiment). This is important for users who want to apply the method in practice.\n\nIn addition, though the experimental results seem to be correct, it would be more interesting for readers to know in which situations we need to construct a probabilistic program with stochastic support. This might deserve a paragraph in the introduction section. The “rethinking” word in the title seems to create confusion because the main algorithm is only compared to the simple Gaussian approximation.\n\nThe mention of Pyro’s AutoGuide in Figure 1 may cause confusion. It might be better to use an explicit algorithm name, e.g. Gaussian variational approximation or the like.\n\nIt is not clear to me how we can differentiate between different paths of a program. Based on the program in Figure 1, apparently it requires users to use different variable names (z1, z2) for different paths. Is this a requirement for users?\n\nIn the discussion of Table 1, “MAP estimate of K=6, the local guide q_k corresponding to the SLP with 5 components” - should this be SLP with 6 components? Nothing to report.",
" The authors address the problem of designing variational proposals for universal probabilistic programs with stochastic support. Their method breaks down the stochastic support of the target program into a mixture model over control-flow paths through a space of straight-line programs, and the authors explain that their variational proposal method contrasts with Divide-Conquer-Combine (its closest cousin, used for MCMC sampling). They demonstrate their method's performance on a toy program, an infinite mixture model, and learning Gaussian process kernels in a PCFG. Their results appear to clearly set a new state of the art in log-predictive density and ELBO. Strengths:\n* Variational inference in deep universal PPLs is widely applicable and significant to a broad variety of downstream problems\n* Clearly superior experimental results, with 100*K dimensionalities\n* Latter two experiments are clearly nontrivial for variational inference settings\n* Color-coding provides clarity\n* Explanations and mathematics are clear to PPL readers\n\nWeaknesses:\n* Explanations and mathematics will be unclear to readers without probabilistic ML background When comparing against sampling-based inference methods, why not include one based on importance sampling? Resample-move SMC inside DCC, for instance, could provide an estimate of the normalizing constant, and therefore (in log space) an ELBO to which to compare.\n\nI don't understand what's meant by \"marginal likelihood\" for an SLP within a larger program. Not every straight-line subprogram is going to make observations before reaching a site of stochastic control-flow, and so your importance weight could just be a prior/proposal density ratio. Is that ratio's expectation used as a normalizing constant to allocate compute power?\n\nThe authors have now clarified and explained away my previous concerns. The authors, peculiarly for a Pyro paper, make no actual use of neural networks to represent conditional densities."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"86sQmhMJLzm",
"H53vUFd2Swq",
"Ed6I8MqQ5h",
"Ed6I8MqQ5h",
"0FhZ0LvnW_Z",
"LGkISGLHysv",
"oGETq0ZD_P0",
"m3SttBGt3EB",
"PgtWuAF-YT",
"ypDiqxlDQWX",
"nips_2022_wjClgX-muzB",
"8QgoRWkQNaF",
"eS9LdC6xNCW",
"ZgkiugRDayY",
"Ip8iKEAWb66",
"ypDiqxlDQWX",
"AlhZVqR5mIv",
"YJok-jP1WDH",
"nips_2022_wjClgX-muzB",
"nips_2022_wjClgX-muzB",
"nips_2022_wjClgX-muzB",
"nips_2022_wjClgX-muzB",
"nips_2022_wjClgX-muzB"
] |
nips_2022_rwdpFgfVpvN | Online Convex Optimization with Hard Constraints: Towards the Best of Two Worlds and Beyond | This paper considers online convex optimization with hard constraints and analyzes achievable regret and cumulative hard constraint violation (violation for short). The problem distinguishes itself from online convex optimization with soft constraints, where a violation at one round can be compensated/cancelled by a conservative decision at a different round. We propose a RECtified Online Optimization algorithm (RECOO) and consider two settings: fixed constraints and adversarial constraints. Both settings have been considered in the literature. Compared with existing results, {\em RECOO achieves the best of two worlds and beyond.} For the fixed-constraints setting, RECOO achieves $O\left(\sqrt{T}\right)$ regret and $O(1)$ violation, where $T$ is the learning horizon. The best known results in this case are $O(\sqrt{T})$ regret and $O\left(T^{1/4}\right)$ violation. For the adversarial-constraints setting, it guarantees $O(\sqrt{T})$ regret and $O(T^{3/4})$ violation, which match the best existing results. When the loss functions are strongly convex, RECOO can guarantee $O(\log T)$ regret and $O(1)$ violation for fixed constraints, and $O(\log T)$ regret and $O(\sqrt{T\log T})$ violation for adversarial constraints. Both these results are order-wise better than the existing bounds. The regret and violation bounds mentioned above use the best fixed decision in hindsight as the baseline. This paper further considers a dynamic baseline where the comparator sequence is time-varying. This paper shows that RECOO not only improves the existing results in the fixed-constraints setting but also {\em for the first time,} guarantees dynamic regret and violation bounds in the adversarial-constraints setting. Our experiment results confirm that RECOO outperforms several existing algorithms for both fixed and adversarial constraints. | Accept | This paper provides an algorithm for online convex optimization with varying unknown constraints. Reviewers agree that the methods involved appear novel and interesting. However, the authors are strongly encouraged to add a discussion of the computational complexity of the method, which may provide the missing tradeoff for the currently free $\epsilon$ parameter. | train | [
"bB7c_FifyoH",
"2iOlBxmu4-e",
"DIWbuvLbCTc",
"RTstfRaiRIf",
"EPb8jV-nCVe",
"PITGJQ8hvsm",
"X_KdTnquHbS",
"ahj-9Xtgd29",
"rYTFUuGaJ-",
"K8lM4H8dAu",
"7aSayy5W-I6",
"0xu4iq-fTeY",
"Iah9IOLI-9"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nWe appreciate your detailed comments again. If you have any further questions, please let us know so we can address them before the rebuttal phase ends. Thank you very much for your time!",
" Dear Reviewer DX8X: \n\nSince it has been a few days that the author-reviewer discussion period started, we were wondering whether you have any additional questions/comments about our response to your review comments. We greatly appreciate your time and comments and will do our best to address your concerns. ",
" **Comparison to [18] and [30]:** The design principle and proof techniques of our algorithm are different from [18] and [30]. For the design principle, our algorithm leveraged the idea of penalty-based proximal optimization to handle the constraints directly; and [18] and [30] leveraged the idea of primal-dual optimization and used the dual variables/virtual queues as the proxy of the constraint violation. For the proof techniques, our techniques quantify the \"regret + violation\" as a whole and establish the regret and (hard) constraint violation directly, which is unlike the Lyapunov drift methods in [18] and [30] that quantify the constraint violation indirectly by bounding the dual variables/virtual queues.\n \n**Questions:** \n\nQ1) We tried to apply the expert-tracking technique in the adversarial constraints, and it did not improve the theoretical results. The main challenge is that the constraint violation could be uncontrollable large with the expert-tracking technique, which we currently cannot overcome. It definitely would be an interesting problem to be investigated further. \n\nQ2-Q3) We have added the missing assumptions in Assumptions 2 and 3, and will re-organize the paper according to your suggestions.\n\n--------------------------- reference in our previous version --------------------------------\n\n[9] Elad Hazan. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2016.\n\n[18] Michael J Neely and Hao Yu. Online convex optimization with time-varying constraints. arXiv preprint arXiv:1702.04783, 2017\n\n[28] Hao Yu, Michael Neely, and Xiaohan Wei. Online convex optimization with stochastic constraints. Advances in Neural Information Processing Systems, 2017. \n\n[30] Hao Yu and Michael J. Neely. A low complexity algorithm with $O(\\sqrt{T})$ regret and $O(1)$ constraint violations for online convex optimization with long term constraints. The Journal of Machine Learning Research, 2020.",
" We sincerely thank the reviewer for the encouraging and constructive comments. We'd like to incorporate these great suggestions in our revision and response the major concern in the following.\n\n**Experiment of online job scheduling in a distributed data center:** We tested our algorithm with the experiment of online job scheduling in a distributed data center similar as in [28]. We study a distributed data center infrastructure with server clusters located at different regions. The incoming jobs arrive at a front-end load balancer and will be scheduled to different clusters to fulfill the service. The service capability of a cluster is a function w.r.t. its energy consumption, and the energy prices vary across locations and times. The goal is to minimize the energy cost while guaranteeing real-time processing for safety-critical jobs. This problem can be formulated as a constrained online convex optimization problem and solved by our algorithm as follows.\n\nSpecifically, we consider $r=10$ regions, each region has 10 clusters, and each time/round $t$ has 5 minutes. Let $x_t\\in \\mathbb R^{100}$ be the energy allocation vector of all clusters at round $t$, where the $i$th entry is the energy allocation of cluster $i$. Let $f_t(x_t) = \\langle c_t, x_t \\rangle$, where $c_t \\in \\mathbb R^{100}$ are the energy prices at time $t$. Let $g_t(x_t) = \\lambda_t - h(x_t)$, where $\\lambda_t$ is the number of job arrivals during time $t$ and $h(x_t)$ is the total service capacity of the data center at time $t$. The constraint violation represents the number of delayed jobs (jobs not severed in real-time). In the experiment, we use a 10-days electricity price trace (i.e., $\\{c_t\\}$) by extracting 10 regions from New York city. We calibrated job arrivals of a realistic traffic pattern with a non-stationary Poisson process $\\{\\lambda_t\\}$ to replace the stationary traffic studied in [28]. \n \nWe consider [27] and [28] in the paper as our baselines. We show the average energy costs (the first table) and constraint violation (the second table) w.r.t. time in the following. It is verified that our algorithm achieves better performance on the loss and constraint violation compared to [27] and [28]. Please find the figures (Figure 3) for a better view in Appendix G of our revision. \n| $T$ | 500 | 1000 | 1500 | 2000 | 2500 | 2880 |\n|:---------------------:|:---------:|:---------:|:--------:|:--------:|:--------:|:--------:|\n| Our Algorithm | 3968.823 | 3854.507 | 5206.972 | 4998.419 | 4948.378 | 5657.571 |\n| Algorithm 1 in [27] | 18102.464 | 10764.099 | 9302.034 | 7818.307 | 6807.092 | 6662.214 |\n| Algorithm 1 in [28] | 12704.005 | 8430.739 | 7806.612 | 7723.638 | 6989.628 | 6860.589 |\n\n| $T$ | 500 | 1000 | 1500 | 2000 | 2500 | 2880 |\n|:---------------------:|:------:|:------:|:------:|:------:|:------:|:------:|\n| Our Algorithm | 18.361 | 15.282 | 12.968 | 11.316 | 10.547 | 9.668 |\n| Algorithm 1 in [27] | 35.872 | 38.908 | 41.721 | 44.463 | 47.358 | 46.444 |\n| Algorithm 1 in [28] | 19.507 | 12.968 | 19.840 | 19.341 | 19.096 | 19.977 | \n\n**Clarification on Slater's condition:** The Slater's condition is *\"there exists a positive $\\epsilon>0$ and $x\\in \\mathcal X$ such that $g_t(x) \\leq -\\epsilon, \\forall t \\in [T]$\"* The Slater's condition is necessary to improve the soft constraint violation in [18, 28, 30]. 
The key improvement in [18, 28, 30] is that the Lyapunov drift technique can provide a refined bound on virtual queues (or dual variables) with Slater's condition, which thus achieves a smaller soft constraint violation. The reason we did not make such an assumption is that we can quantify the \"regret + violation'' as a whole, from which we can establish the regret and (hard) constraint violation directly, which is unlike the Lyapunov drift methods in [18, 28, 30] that quantify the constraint violation indirectly by bounding the dual variables/virtual queues. \n \n**Proper benchmark:** The current benchmark is the best fixed decision in hindsight, which minimizes the total costs while satisfying the constraints at each round. It serves as a classical benchmark in (constrained) online convex optimization [9, 18, 28, 30] and has wide applications (e.g., prediction with expert advice or recommendation systems). The mentioned benchmark $\\sum_{t=1}^T g^{+}_t(x) \\leq C$ is interesting and more challenging. For the case of $C=0$, it reduces to the classical benchmark studied in the paper. For the cases where $C > 0$, or even where $C$ is a function of $T$, the baseline is strong and adaptive, which potentially introduces large regret for any causal online learning algorithm. This baseline has not been investigated in the literature before and we will definitely look into this problem in the future. \n ",
" --------------------------- reference in our previous version --------------------------------\n\n[26] Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl Johansson. Regret and cumulative constraint violation analysis for online convex optimization with long term constraints. In International Conference on Machine Learning, 2021.\n\n[27] Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl H Johansson. Regret and cumulative constraint violation analysis for distributed online constrained convex optimization. arXiv preprint arXiv:2105.00321, 2021.\n\n[28] Hao Yu, Michael Neely, and Xiaohan Wei. Online convex optimization with stochastic constraints. Advances in Neural Information Processing Systems, 2017. \n\n[30] Hao Yu and Michael J. Neely. A low complexity algorithm with $O(\\sqrt{T})$ regret and $O(1)$ constraint violations for online convex optimization with long term constraints. The Journal of Machine Learning Research, 2020.",
" We greatly appreciate the reviewers' constructive comments and positive evaluation towards the novelty of this paper. We'd like to response to the major comments in the following.\n\n**Explanation on why our algorithm achieves a better bound:** Intuitively, the reason that our algorithm achieves a better bound lies in the rectified design of $Q(t)$ that imposes a minimum penalty at each round such that our algorithm avoids using overly aggressive actions. Unlike the traditional primal-dual based methods [26-28] and [30], it might choose aggressive actions that violate the constraints when the ''dual penalty'' is small. Technically, with the rectified design of $Q(t)$ and the clipped constraint function $\\hat g^{+}(\\cdot)$, our algorithm provides an upper bound on the ''regret + violation'' at any time $t$, i.e., $f_t(x_t) - f_t(x^*) + Q(t)\\hat g^{+}_t(\\cdot)$. It quantifies the ''regret + violation'' as a whole and we are able to establish regret and (hard) constraint violation directly. These techniques again are different with the traditional methods in [26-28] and [30] that quantify the constraint violation indirectly by bounding the dual variables/virtual queues. We have emphasized the key technical contributions in the revision. \n \n**Experiment of online job scheduling in a distributed data center:** We tested our algorithm with the experiment of online job scheduling in a distributed data center similar as in [28]. We study a distributed data center infrastructure with server clusters located at different regions. The incoming jobs arrives at a front-end load balancer and will be scheduled to different clusters to fulfill the service. The service capability of a cluster is a function w.r.t. its energy consumption and the energy prices vary across locations and times. The goal is to minimize the energy cost while guaranteeing real-time processing for safety-critical jobs. This problem can be formulated as a constrained online convex optimization problem and solved by our algorithm as follows.\n\nSpecifically, we consider $r=10$ regions, each region has 10 clusters, and each time/round $t$ has 5 minutes. Let $x_t\\in \\mathbb R^{100}$ be the energy allocation vector of all clusters at round $t$, where the $i$th entry is the energy allocation of cluster $i$. Let $f_t(x_t) = \\langle c_t, x_t \\rangle$, where $c_t \\in \\mathbb R^{100}$ are the energy prices at time $t$. Let $g_t(x_t) = \\lambda_t - h(x_t)$, where $\\lambda_t$ is the number of job arrivals during time $t$ and $h(x_t)$ is the total service capacity of the data center at time $t$. The constraint violation represents the number of delayed jobs (jobs not severed in real-time). In the experiment, we use a 10-days electricity price trace (i.e., $\\{c_t\\}$) by extracting 10 regions from New York city. We calibrated job arrivals of a realistic traffic pattern with a non-stationary Poisson process $\\{\\lambda_t\\}$ to replace the stationary traffic studied in [28]. \n \nWe consider [27] and [28] in the paper as our baselines. We show the average energy costs (the first table) and constraint violation (the second table) w.r.t. time in the following. It is verified that our algorithm achieves better performance on the loss and constraint violation compared to [27] and [28]. Please find the figures (Figure 3) for a better view in Appendix G of our revision. 
\n| $T$ | 500 | 1000 | 1500 | 2000 | 2500 | 2880 |\n|:---------------------:|:---------:|:---------:|:--------:|:--------:|:--------:|:--------:|\n| Our Algorithm | 3968.823 | 3854.507 | 5206.972 | 4998.419 | 4948.378 | 5657.571 |\n| Algorithm 1 in [27] | 18102.464 | 10764.099 | 9302.034 | 7818.307 | 6807.092 | 6662.214 |\n| Algorithm 1 in [28] | 12704.005 | 8430.739 | 7806.612 | 7723.638 | 6989.628 | 6860.589 |\n\n\n| $T$ | 500 | 1000 | 1500 | 2000 | 2500 | 2880 |\n|:---------------------:|:------:|:------:|:------:|:------:|:------:|:------:|\n| Our Algorithm | 18.361 | 15.282 | 12.968 | 11.316 | 10.547 | 9.668 |\n| Algorithm 1 in [27] | 35.872 | 38.908 | 41.721 | 44.463 | 47.358 | 46.444 |\n| Algorithm 1 in [28] | 19.507 | 12.968 | 19.840 | 19.341 | 19.096 | 19.977 | \n\n**The possible reasons on the small violation in Figure 1:** The possible reasons could be the low dimension of decision variables and the relatively static and loose constraints, which make the baseline algorithms quickly adapt to the violation as well.\n",
" --------------------------- reference in our previous version --------------------------------\n\n[9] Elad Hazan. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2016.\n\n[15] Mehrdad Mahdavi, Rong Jin, and Tianbao Yang. Trading regret for efficiency: online convex optimization with long term constraints. The Journal of Machine Learning Research, 2012.\n\n[26] Xinlei Yi, Xiuxian Li, Tao Yang, Lihua Xie, Tianyou Chai, and Karl Johansson. Regret and cumulative constraint violation analysis for online convex optimization with long term constraints. In International Conference on Machine Learning, 2021.\n\n[30] Hao Yu and Michael J. Neely. A low complexity algorithm with $O(\\sqrt{T})$ regret and $O(1)$ constraint violations for online convex optimization with long term constraints. The Journal of Machine Learning Research, 2020.\n\n------------------------------- new reference ------------------------------------------------\n\n[R1] Elad Hazan and Satyen Kale. Projection-free online learning. In Proceedings of International Conference on Machine Learning, 2012.",
" The reviewer's main concern is whether the proposed algorithm has any advantage against the projection-based methods. We'd like to address your concern from the following aspects.\n\n**Projection-based methods do not work for unknown constraints:** Projection-based methods need to know the constraint functions before decisions are made so that a safe action can be selected from the feasible set. However, in the setting we consider, the constraint function of round $t$ is revealed to the learner only after action $x_t$ is taken, so it is impossible to project an action to the unknown feasible set. Surprisingly, sublinear regret and constraint violation can still be established by using ''old'' constraint functions, and can be achieved using our unified algorithm. \n\nWe do want to apologize for the confusing terminology of \"fixed (known) constraints\" in the current version. The setting should be fixed but *unknown* constraints. Since RECOO is designed for unifying constraints and does not need to know whether the constraints are fixed or adversarial, the two settings are just for analysis purposes where we show that for fixed constraints, RECOO improves the state-of-the-art result without explicitly utilizing the fact that the constraints are fixed. Again, without knowing that the constraints are fixed apriori, it is hard to construct a feasible set apriori. We have removed the word ''known'' to avoid confusion in our revision and added more explanation on the ''fixed'' constraints. \n\n**Our algorithm is more efficient than the projection-based method:** As the reviewer pointed out, the projection operation in general has very high computational complexity, which is the main reason that motivates the projection-free methods, e.g., online Frank-Wolfe algorithm (see Chapter 7 in [9] or [R1]) and our related work in [26] and [30]. Note online Frank-Wolfe algorithm approximates the projection operator at each round by a linear programming with the exact same constraint functions. \n \nOur algorithm is more efficient than the projection-based algorithm and has the same complexity with [26] and [30] (note [30] is a low complexity algorithm as the title suggested). In our algorithm, we only need to solve an \"almost\" unconstrained optimization problem ($\\mathcal X$ is usually a simple set like the box constraints). Therefore, the gradient-based methods are sufficient to find the minimizer or we might even find its close-form with the inverse operation of the function by taking ''gradient = zero''. In other words, our algorithm v.s. projection-based method is similar to unconstrained optimization problem v.s. constrained optimization problem. \n \nBesides, our algorithm is closely related to the proximal optimization method, which solves $\\min_{x} (f(x) + g(x))$ with the proximal gradient method $x_{t} = Prox_{g} (x_{t-1} - \\nabla f(x_{t-1})/\\alpha)$ where $Prox_{g}$ is the proximal mapping of function $g$. The update of $x_t$ is equivalent to $x_{t} = argmin_x(g(x) + f(x_{t-1}) +\\langle \\nabla f(x_{t-1}), x-x_{t-1}\\rangle + \\frac{\\alpha}{2} ||x - x_{t-1}||^2)$. In the online counterpart, we aim to minimize $f_t(x) + g_t(x)$ at time $t$. Since $f_t(x)$ and $g_t(x)$ are unknown when making decision at time $t$. Our algorithm approximates them with ''old'' functions $f_{t-1}(x) + \\hat g^{+}_{t-1}(x)$. Interestingly, we show this design can achieve both sublinear regret and constraint violation for both fixed constraints and adversarial constraints. 
Since the proximal operator is applied to the old constraints and the constraints may be adversarial, the analysis is highly nontrivial. We have clarified this connection in the revision. \n\n**Discussion on [15]:** We thank the reviewer for pointing out the reference [15]. [15] is for soft constraints where the primal-dual optimization leads to efficient algorithms. As far as we know, for hard constraints, our algorithms have similar computational complexity to the existing algorithms (e.g., [30]) and are more efficient than the projection-based methods.\n\nWe hope the response above addresses the reviewer's concern and we sincerely hope the reviewer will reevaluate the paper based on our response. ",
" We would like to sincerely thank the reviewer for the very encouraging comments. We have included more baseline algorithms such as the projected online gradient descent algorithm and online Frank-Wolfe algorithm and emphasized the advantages of RECOO over them in our revision.",
" The problem focus on an online convex optimization problem in which the learner wants to minimize the regret and at the same time does not want to violate some (known or unknown) constraints too often. In particular, the work focus on hard constraints, i.e., such that a violation at one round cannot be compensated by a strictly feasible decision at a different round. They consider a setting with fixed known constraints, providing a regret bound of $O(\\sqrt{T})$ and a violation of $O(1)$. For the setting in which the constraints are adversarial, they provide a regret bound of $O(\\sqrt{T})$ and a constraint violation of $O({T}^{3/4})$. Moreover, they consider the case with strictly convex functions, proving better bounds. The paper studies a well known important problem.\nThe paper is well written and provides a detailed comparison with previous works.\nThe paper provides new techniques that are useful to obtain better regret and violation bounds to the problem.\n\nI have a huge concern about the significance of the result, at least for the setting with fixed constraints.\nIn particular, the problem can be trivially solved considering the convex decision space in which the constraint is satisfied and using a standard regret minimizer on this set. However, this approach usually requires to project on the feasible action space.\nThe main reason to consider a setting with relaxed constraints (soft or hard) is that in this setting is it possible to design very efficient algorithms that do not require projection on complex set (see for instance the abstract of [15]).\n\nHowever, your algorithm does not seem efficient. The step Rectified decision in the Algorithm RECOO requires to compute a minimization over a function that includes a generic convex function $\\hat g^+_{t-1}(\\cdot)$.\nMoreover, it seems that your approach is very related to standard online convex optimization problems in which the constraint g cannot be violated. In particular, if I look at your result, the constant $\\epsilon$ appears only in the denominator of the regret and violation bounds. So the optimal choice for $\\epsilon$ should be $\\infty$. In this case $\\gamma_t$ is very large and $\\hat g^+_t(x)\\rightarrow \\infty$ when $g^+(x)$ is positive. This is a well known equivalent way to represent classical online convex optimization problems.\n\nPost-rebuttal update: I'm partially convinced by the answers given by the authors. I agree that the algorithm proposed by the authors has a lower complexity w.r.t. standard projection based algorithms. I think that this is one of the main motivation of the work and should be highlighted. I'm still convinced that if we ignore the computational complexity motivation the problem can be solved using standard online convex optimization methods. Indeed, your method reduces to classical online convex optimization when $\\epsilon$ goes to $\\infty$. I think you should highlight the role of $\\epsilon$ and trade of between performances and regret. Finally, I agree with the authors that the analysis is non-trivial for adversarial constraints. Let me know if I miss something about the relation with classical online convex optimization problems. Yes",
" This work studies the online convex optimization problem with 'hard' constraints. Here hard constraints mean that the violation from different rounds can not compensate for each other. The authors propose a RECOO algorithm and apply it to both the fixed constraint setting and the adversarial constraint setting. RECOO outperforms previous algorithms by achieving either a better regret or a smaller violation. At the core of RECOO is a 'rectifying' scheme applied to constraint functions. This work also considers a dynamic regret setting and provides corresponding results of their algorithms. Experiment results back up their theoretical findings. Strengths:\n+ The presentation is clear. \n+ The theoretical results are new and important. \n+ The experiments provide a comprehensive view of their algorithms. \n\nWeaknesses:\n+ More baseline algorithms may be mentioned in the summary section. For instance, for the fixed constraints setting, a normal online SGD method with a projection w.r.t. the constraint $g$ is also a baseline method. + I do not have any specific questions. No, the authors suggest 'We didn’t find any major limita375 tions in applying our results to constrained online convex programming.'",
" Online convex optimization (OCO) is a significant part of online learning literature. In real-world applications, there are constraints that need to be satisfied through optimization. This paper theoretically analyzes the constraint OCO under fixed and adversarial settings when the loss is either convex or strongly convex. Unlike the previous works, it considers the hard constraint violation which means that the average constraint violation (CV) is the average of the positive constraint violation and ignores the negative one which can lead to a minima which after a large iteration gives an infeasible solution. It also considers two kinds of regret: 1- static regret and 2- dynamic regret. Under hard constraint assumption, for all of these different settings, it either recovers the existing results for CV or improves it. \n Clarity\n\nThe paper is written clearly with enough explanation for different settings. It explains well the significance of the hard constraint setting. The proofs are presented in an understandable manner. \n\nSince the paper mainly is theoretical, it is not cleared enough why it gets better CV bound w.r.t. existing results. For example, explain which technical method resulted in that bound. \n\nOriginality\n\nConsidering the hard constraint in OCO is novel and also important for real-world problems. \n\nQuality\n\nThe submission is technically sound. I’ve checked the proofs and they look correct and sound. However, the technique they’ve leveraged are not new and have been used in previous works. \n\nSignificance\n\nThe result is important for the real-world problem and besides, they improve the CV bound in a few settings compared to the existing results. \n\n\nLimitation\n\nThe experimental results are just for toy examples. It would illustrate the significance of the proposed method if you have some results on a real-world dataset. \n\n\n\n\n 1- In the experimental result, we see that the CV bound for the existing methods (Fig 1) also shows the constant bound and not sublinearly as claimed in Table 1 or 2. What do you think is the reason for this observation? \n This works addresses the potential social impacts. \n",
" This paper studies an extension of the well-known Online Convex Optimization (OCO) problem where at each round $t\\in[T]$, upon committing to an action $x_t$, both a loss function $f_t$ and also a constraint function $g_t$ are revealed. The goal is to minimize the overall incurred loss $\\sum_{t=1}^T f_t(x_t)$ while minimizing the hard constraint violation $\\sum_{t=1}^T \\max (g_t(x_t),0)$. The authors propose a single algorithm called RECOO that obtains state-of-art bounds for static/dynamic regrets and constraint violation for convex/strongly convex losses. The numerical experiments verify the theoretical findings as well. Strengths:\n- Paper is well-written and the authors have done an excellent job motivating the problem and giving intuitions for their algorithm.\n- The literature review is comprehensive and clearly compares and contrasts related works to this paper.\n- Imposing a time-varying minimum penalty price in the update for $Q(t)$ seems novel and changes the proof techniques for bounding the constraint violation.\n\nWeaknesses:\n- While the experiments clearly highlight the theoretical contributions of the paper, they have been done only for synthetic datasets. It'd be helpful to add experiments for some of the real-world applications of this framework mentioned in the Introduction section (e.g., safety-critical applications).\n- The claim that the algorithm manages to achieve the \"best of two worlds\" is a bit misleading. \"Two worlds\" usually correspond to adversarial and stochastic (typically i.i.d.) constraints, however, in this paper, it refers to adversarial and fixed constraints.\n- In the paper, it needs to be clarified what Slater's condition is, how so many of the prior works have assumed this condition holds to obtain better bounds, and how the framework in this paper does not make such an assumption.\n- The notion of path length $P_T$ has been used several times in the paper before it is finally defined on page 6.\n- The natural static benchmark for this problem should only satisfy the constraint $\\sum_{t=1}^T \\max (g_t(x),0)$, however, in the paper, the benchmark is further restricted to satisfy $g_t(x)\\leq 0~\\forall t\\in[T]$. The authors should explain and motivate this choice of benchmark, and mention any potential hardness results for regret against the more natural static benchmark.\n- The algorithm is quite similar to that of [18] and [30] (putting aside the new idea of rectifying $Q(t)$). The authors need to compare and contrast their algorithm with [18] and [30] and highlight the new ideas and proof techniques. I have already mentioned many of my suggestions and questions earlier. In addition:\n- Does the expert-tracking technique of [33] for obtaining optimal $O(\\sqrt{P_T T})$ regret bounds apply to the setting with adversarial constraints as well? (in the paper, you have only applied to the setting with fixed constraints)\n- The proofs provided in the text are not very insightful. They could be deferred to the appendix and the space could be used for additional experiments with real-world datasets.\n- In the statement of Theorem 1 and Theorem 3, state that the loss functions are assumed to be convex. It is currently missing. The scope of their proposed algorithm has been clearly specified. This theoretical work does not have any potential negative societal impacts and there is no need for addressing it."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
5
] | [
"nips_2022_rwdpFgfVpvN",
"K8lM4H8dAu",
"RTstfRaiRIf",
"Iah9IOLI-9",
"PITGJQ8hvsm",
"0xu4iq-fTeY",
"ahj-9Xtgd29",
"K8lM4H8dAu",
"7aSayy5W-I6",
"nips_2022_rwdpFgfVpvN",
"nips_2022_rwdpFgfVpvN",
"nips_2022_rwdpFgfVpvN",
"nips_2022_rwdpFgfVpvN"
] |
nips_2022_sRKNkpUMQNr | Information-Theoretic GAN Compression with Variational Energy-based Model | We propose an information-theoretic knowledge distillation approach for the compression of generative adversarial networks, which aims to maximize the mutual information between teacher and student networks via a variational optimization based on an energy-based model. Because the direct computation of the mutual information in continuous domains is intractable, our approach alternatively optimizes the student network by maximizing the variational lower bound of the mutual information. To achieve a tight lower bound, we introduce an energy-based model relying on a deep neural network to represent a flexible variational distribution that deals with high-dimensional images and consider spatial dependencies between pixels, effectively. Since the proposed method is a generic optimization algorithm, it can be conveniently incorporated into arbitrary generative adversarial networks and even dense prediction networks, e.g., image enhancement models. We demonstrate that the proposed algorithm achieves outstanding performance in model compression of generative adversarial networks consistently when combined with several existing models. | Accept | This work concerns the compression of generative adversarial networks and other image generation networks, such as dense prediction/image to image networks. Where existing approaches to compress these models rely on matching pairs of outputs, this work optimizes the Barber-Agakov lower bound on the differential mutual information between teacher and student, parameterizing the bound using an energy-based model. Offline distillation alternately fixes the student network parameters and the EBM parameters while optimizing the other, while online distillation can be performed by also including the teacher parameters, holding two of the three fixed while optimizing the third in a "round robin" fashion. The gradient of the EBM partition function is estimated using Langevin dynamics for a fixed number of steps, with chains initialized at student outputs. Qualitative and quantitative results support the assertion that this represents an improvement over existing approaches.
Reviewers found the paper overall quite clear, the method moderately original, the technical details sound, and the work well-situated in the context of other compression methods. Reviewer cQSG in particular praised the "commendable job in ablations against other relevant methods". 6hcq had several concerns around the clarity of the manuscript, which were addressed in rebuttal and hopefully can be incorporated into future versions. Several reviewers had concerns about the scale of experiments; the authors responded in rebuttal with megapixel experiments using StyleGAN2. The AC concurs with cQSG who remarked that these results significantly strengthen the paper.
Reviewer tn1T remained skeptical of the presented qualitative results, even after rebuttal; however, the authors were belatedly able to provide qualitative results on 1024x1024 images in the supplementary material. tn1T did not comment on these results, so I cannot infer how they feel about them, but they appear somewhat convincing to the AC: the samples presented from the VEM-trained do not show exhibit artifacts to the same degree. There is, of course, the issue of cherry-picking, and I would encourage the authors to provide as many of these comparisons as possible at reduced size, highlighting artifacts where they appear but not dropping baseline samples which do not exhibit them (if such samples arise), and highlighting any noticeable artifacts in the images produced by their method.
The AC feels that this work is technically solid, and noting the endorsement of two reviewers and that the qualitative comparison demanded by tn1T was carried out but not acknowledged by tn1T, the case in favour of acceptance outweighs that in favour of rejection. | train | [
"kKYJqg529L",
"vmATO-0ThZi",
"WrWPbIRiw_",
"-7e6HyzqrqB",
"lQ2MRDmUMj",
"OA1l1sSSxAc",
"ao_nSHKTOOp",
"rARv2VY3v4",
"jYbekigQdQ_",
"beQPeAoXx4V",
"GtrgH7agn_7",
"tZUEx2i0H3B",
"J4a6pietHs",
"7Sb95teitJ0"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank all reviewers for their constructive and positive comments and we present the summary of our responses to each reviewer as below. \n\nQ1. Visual quality compared with the previous methods \n\nA. Upon request by Reviewer tn1T, we present more qualitative results of VEM, OMGD, and CAGC in Figure 13, 14, 15 in the new supplementary document (Rebuttal_VEM.pdf). As seen in the presented images, VEM achieves qualitative better results than OMGD and CAGC and tends to generate the images that are visually similar to teacher images, which is also supported by Table A3. Note that we did not run additional experiments to prepare the new supplementary file and chose image pairs (out of the existing results made for our initial submission) that have larger visual quality differences.\n\n\nQ2. Moderate Originality \n\nA. We partially agree with your opinion since we follow the footsteps of prior works for fair comparisons and show the effectiveness of the proposed method when combined with them. However, we would like to emphasize that our key contributions lie in 1) the information-theoretic problem formulation of GAN compression using mutual information and 2) the introduction of EBMs to variational distributions and its successful application to a practical problem. Such ideas have not been addressed before and are technically sound, so we believe that our paper has sufficient novelty.\n\n\nQ3. Additional comparisons with CRD and VID \n\nA. As presented in Table A1, A2, VEM is consistently more effective than VID and CRD and the two methods sometimes perform worse than the baselines, which is also supported by Table 4 of the main paper. \n\n\nQ4. Additional experiments for 1024x1024 resolutions using StyleGAN2\n\nA. We appreciate you for a good suggestion, but it is really hard to perform the experiment you suggested since it takes an even longer time to compress a Stylegan2 for 1024 pixel image generation tasks. We believe that VEM can work well for generative models for 1024 pixels since our method is theoretically validated and not limited to the ones for 256 pixels. We will perform the experiment and will add the results in the final version if our paper is accepted. \n \n\n\nQ5. The size of energy-based model and computational cost of VEM\n\nA. For the size of the energy-based model, it consumes 0.12G MACs for the Horse $\\rightarrow$ Zebra dataset while the model requires 1.95G MACs for other datasets. Also, our method requires an extra 0.5x ∼1.5x training cost depending on datasets and models. Although the presence of MCMC in VEM incurs an extra training cost, the cost can be reduced by decreasing the number of MCMC steps. When the number of MCMC steps is reduced from 10 (used in all experiments) to 5 combined with OMGD, the FID score on the Horse $\\rightarrow$ Zebra dataset is increased from 50.83 to 52.04, which still outperforms the baseline algorithm. Therefore, there is a trade-off between the performance and the training time and we will add the discussions in the final version. \n\nQ6. Is VEM applicable in absence of a distillation method?\n\nA. Yes, VEM is applicable when the previous methods are absent. But, we believe that VEM is more effective when combined with the methods.\n\n\nQ7. Are there specific technical issues preventing the application of this method to more recent models or other large scale image modeling networks used in recent text-to-image generation?\n\nA. 
Since the outputs are also images in text-to-image generation tasks, our method is easily applicable to the networks without any modifications.\n\nQ8. Presentation issues related to clarification, Figure 1, limitations, and definition of abbreviations\n\nA. We will carefully revise the paper to reflect all comments in the final version. \n",
" Upon request by Reviewer tn1T, we present more qualitative results of VEM, OMGD, and CAGC in Figure 13, 14, 15 in the new supplementary document (Rebuttal_VEM.pdf). As seen in the presented images, VEM achieves qualitative better results than OMGD and CAGC and tends to generate the images that are visually similar to teacher images. Note that we did not run additional experiments to prepare the new supplementary file and chose image pairs (out of the existing results made for our initial submission) that have larger visual quality differences.",
" Thank you for your responses. I have no further questions.",
" Thanks for your response. The added table looks good to me. However, I still think the qualitative results is an issue of the work. Different from discriminative models, quantitative metrics weigh much less to represent practicality of GANs. The paper will get stronger if you can show VEM does push the student to mimic teacher better in image translation, image embedding, and image generation. From Fig.2 and Fig.3, it's still hard to tell how +VEM improves upon OMGD and CAGC.",
" Dear Reviewer tn1T, \n\nBecause the end of discussion period is approaching, we kindly ask you whether our response is helpful to clarify you or not.\n\nAlso, if you have any questions or additional comments, please do not hesitate to contact us.\n\nOnce again, thank you for your time and efforts to review our paper.\n\nBest wishes, \n\nAuthors",
" Dear Reviewer cQSG,\n\nWe appreciate you and will revise the main paper as discussed.\n\nBest wishes,\n\nAuthors",
" These additional experiments enhance and strengthen the core arguments of the paper. Kudos to the authors for completing them in such a tight timeframe.\n\nGiven the applicability of the method, as well as the array of experimental results I have updated my score on the paper. Thanks to the authors for the extra experiments, and answers to questions to myself and the other reviewers.",
" Dear Reviewer cQSG, \n\nThe styleGAN2 experiments using VID and CRD on the FFHQ dataset are just finished and we present the results as below, where we also updated the missing results in Table A2.\n\nAs a result, our method also outperforms the baseline methods.\n\nThank you very much and please let us know if you have any questions. \n\n\n\nTable A3: Performance comparison with CRD and VID on the FFHQ dataset using StyleGAN2 (related to Table 3 of the main paper). \n\n| Dataset | Model | Method | MACs | Compression Ratio | FID ($\\downarrow$) |\n|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|\n| FFHQ | StyleGAN2 | CAGC* | 4.10B | 90.91%| 7.90 |\n| FFHQ | StyleGAN2 | GAGC* + VID | 4.10B | 90.91% | 7.51 |\n| FFHQ | StyleGAN2 | CAGC* + CRD | 4.10B | 90.91% | 7.65 |\n| FFHQ | StyleGAN2 | CAGC* + VEM | 4.10B | 90.91% | 7.48 |\n\n",
" We truly thank you for your constructive and positive comments and below are our responses to the main questions.\n\nQ1. Discussion about the visual quality presented in figure 2\n\nA. We partially agree with you, however our method provides much better quality on the horse $\\rightarrow$ zebra dataset, where our method significantly outperforms OMGD. For other datasets in figure 2, since the performance improvement is relatively smaller, it may be hard to mention that our method bears better visual quality. Also, we clarify that ‘Original’ indicates not a teacher network but a network provided by the official checkpoints given by [A1, A2]. Also, requested by Reviewer tn1T, we measured the perceptual distance using feature reconstruction loss [A3] between the two output distributions of the teacher and student networks for quantitative metrics to demonstrate the effectiveness of VEM. In case of the paired datasets using Pix2Pix, we did not perform the experiments to measure the distance since VEM maximizes the mutual information between the student outputs and ground-truth ones as mentioned in the main paper. As a result, it is empirically validated that VEM does push the outputs given by the student to look similar to the one given by the teacher as presented in Table A3. \n\nTable A3: Averaged perceptual distance [A3] over the validation dataset between the two output distributions of the teacher and student networks using CycleGAN.\n\n| Dataset | Method | Perceptual distance [A3] |\n|:----------------:|:---------------:|:---------------:|\n| Horse $\\rightarrow$ Zebra | OMGD* | 0.3805 |\n| Horse $\\rightarrow$ Zebra | OMGD* + VEM | 0.3721 |\n| Summer $\\rightarrow$ Winter | OMGD* | 0.1843 |\n| Summer $\\rightarrow$ Winter | OMGD* + VEM | 0.1783 |\n\n\nQ2. Additional experiments for 1024x1024 resolutions using StyleGAN2\n\nA. We appreciate you for a good suggestion, but it is really hard to perform the experiment you suggested since it takes an even longer time to compress a Stylegan2 for 1024 pixel image generation tasks. We believe that VEM can work well for generative models for 1024 pixels since our method is theoretically validated and not limited to the ones for 256 pixels. We will perform the experiment and will add the results in the final version if our paper is accepted. \n\nQ3. Computational overhead about VEM\n\nA. Please refer to our response to Q3 of Reviewer 6hcq.\n\n\nReference\n\n[A1] P. Isola et al., Image-to-Image Translation with Conditional Adversarial Networks, CVPR 2017.\n\n[A2] J.Y. Zhu et al., Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Network, ICCV2017.\n\n[A3] J. Johnson et al., Perceptual Losses for Real-Time Style Transfer and Super-Resolution, ECCV 2016.\n",
" We truly thank you for your constructive and positive comments and below are our responses to the main questions.\n\nQ1. Presentation issues related to clarification, figure 1, and definition of abbreviations\n\nA. We will carefully revise the paper to reflect your comments in the final version. For $q(t|s)$, it is impossible to check whether the variational distribution is tight or not because the true distribution is unknown. In figure 1, as you mentioned, the vertical arrows indicate the knowledge distillation employed in the previous methods. \n \nQ2. What is the interpretation of vertical arrows and backbone algorithms?\n\nA. In figure 1, as you mentioned, the vertical arrows indicate the knowledge distillation employed in the previous methods. For the knowledge distillation, the details are presented in Section A.1 of the supplementary material and please refer to it.\n\nQ3. The size of energy-based model and computational cost of VEM\n\nA. For the size of the energy-based model, it consumes 0.12G MACs for the Horse $\\rightarrow$ Zebra dataset while the model requires 1.95G MACs for other datasets. Also, our method requires an extra 0.5x ∼1.5x training cost depending on datasets and models. Although the presence of MCMC in VEM incurs an extra training cost, the cost can be reduced by decreasing the number of MCMC steps. When the number of MCMC steps is reduced from 10 (used in all experiments) to 5 combined with OMGD, the FID score on the Horse $\\rightarrow$ Zebra dataset is increased from 50.83 to 52.04, which still outperforms the baseline algorithm. Therefore, there is a trade-off between the performance and the training time and we will add the discussions in the final version. \n\nQ4. Is VEM applicable in absence of a distillation method?\n\nA. Yes, VEM is applicable when the previous methods are absent. But, we believe that VEM is more effective when combined with the methods\n",
" We truly thank you for your constructive and positive comments and below are our responses to the main questions.\n\nQ1. Moderate originality \n\nA. We partially agree with your opinion since we follow the footsteps of prior works for fair comparisons and show the effectiveness of the proposed method when combined with them. However, we would like to emphasize that our key contributions lie in 1) the information-theoretic problem formulation of GAN compression using mutual information and 2) the introduction of EBMs to variational distributions and its successful application to a practical problem. Such ideas have not been addressed before and are technically sound, so we believe that our paper has sufficient novelty.\n\nQ2. Large-scale experiments\n\nA. We really appreciate you for a good suggestion, however, please understand that it is very difficult to perform the suggested large-scale experiments during the short rebuttal period. \n\nQ3. Are there specific technical issues preventing the application of this method to more recent models or other large scale image modeling networks used in recent text-to-image generation?\n\nA. Since the outputs are also images in text-to-image generation tasks, our method is easily applicable to the networks without any modifications.\n\nQ4. Missing experiments for CRD and VID in Table 1, 2, and 3\n\nA. Since we believe that it is enough to show that the proposed method outperforms CRD and VID as presented in Table 4, we did not perform experiments for CRD and VID on all datasets and models. As suggested by Reviewer cQSG, we perform additional experiments except the Pix2Pix model using CRD due to the short rebuttal period while the results of the styleGAN2 experiments will be updated. Note that the results of Table A1 for the Horse $\\rightarrow$ Zebra dataset are copied from Table 4 of the main paper. As presented in table A1 and A2, VEM is consistently more effective than the previous method while VID and CRD sometimes perform worse than the baseline. We appreciate you for a good suggestion and will add the experiments in the final version if our paper is accepted. \n\n\nTable A1: Performance comparison with CRD and VID using CycleGAN (related to Table 2 of the main paper). \n\n| Dataset | Method | MACs | #Parameters | FID ($\\downarrow$) |\n|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|\n| Horse $\\rightarrow$ Zebra | OMGD* | 1.41G (40.3x) | 0.14M (82.5x) | 57.14 |\n| Horse $\\rightarrow$ Zebra | OMGD* + VID | 1.41G (40.3x) | 0.14M (82.5x) | 65.73 |\n| Horse $\\rightarrow$ Zebra | OMGD* + CRD | 1.41G (40.3x) | 0.14M (82.5x) | 70.59 |\n| Horse $\\rightarrow$ Zebra | OMGD* + VEM | 1.41G (40.3x) | 0.14M (82.5x) | 50.83 |\n| Summer $\\rightarrow$ Winter | OMGD* | 1.41G (40.3x) | 0.14M (82.5x) | 75.20 |\n| Summer $\\rightarrow$ Winter | OMGD* + VID | 1.41G (40.3x) | 0.14M (82.5x) | 74.39 |\n| Summer $\\rightarrow$ Winter | OMGD* + CRD | 1.41G (40.3x) | 0.14M (82.5x) | 74.54 |\n| Summer $\\rightarrow$ Winter | OMGD* + VEM | 1.41G (40.3x) | 0.14M (82.5x) | 74.04 |\n\nTable A2: Performance comparison with CRD and VID on the CelebA and FFHQ datasets using SAGAN and StyleGAN2, respectively (related to Table 3 of the main paper). 
\n\n| Dataset | Model | Method | MACs | Compression Ratio | FID ($\\downarrow$) |\n|:----------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|\n| CelebA | SAGAN | GCC* | 15.45M | 34.12%| 27.91 |\n| CelebA | SAGAN | GCC* + VID | 15.45M | 34.12% | 26.57 |\n| CelebA | SAGAN | GCC* + CRD | 15.45M | 34.12% | 30.93 |\n| CelebA | SAGAN | GCC* + VEM | 15.45M | 34.12% | 25.27 |\n| FFHQ | StyleGAN2 | CAGC* | 4.10B | 90.91%| 7.90 |\n| FFHQ | StyleGAN2 | GAGC* + VID | 4.10B | 90.91% | 7.51 |\n| FFHQ | StyleGAN2 | CAGC* + CRD | 4.10B | 90.91% | 7.65 |\n| FFHQ | StyleGAN2 | CAGC* + VEM | 4.10B | 90.91% | 7.48 |\n\nQ5. Discussion about limitations of the proposed method \n\nA. As you pointed out, our method increases the number of hyperparameters related to MCMC. Also, We thank you for your comment about the lack of an analysis about the bias problem for GAN compression approaches and the analysis seems to be a meaningful direction for the next research. We will add the limitations in the final version if our paper is accepted.\n",
" Information-Theoretic Generative Model Compression with Variational Energy-based Model focuses on compressing generative and structured prediction neural networks, using a combination of teacher student distillation (either online or offline) and a variational energy based model which is optimized by MCMC. Combining a flexible variational distribution rather than alternative approaches taken in prior work, this method shows improvement on several image to image and image generation network compression tasks when combined with existing methods in the literature.\n Strengths:\nThere are a wide swath of generative models and compression methods applicable to such networks, the authors have done a commendable job in ablations against other relevant methods, as well as detailing both the background / core methodology of their method and several others. Though the proposed method (in terms of implementation) is quite complex, the description as well as coupled code and detailed supplemental materials are sufficient to fully understand the proposed method. This clarity should make the paper accessible to the broader community.\n\nOverall, I find the paper to be good quality, with significant interest to the model compression community and (with potential application to alternate methods) interest to the generative community at large.\n\nWeaknesses:\nWeaknesses in terms of experiments largely come in the form of \"directional improvement\", specifically for large, high performance models. The experiments here show a marked improvement in the small-to-medium scale, but application to more recent models could improve the impact of this work in the broader community - the lack of inference overhead, a large reduction in parameter count, and preservation or improvement of performance (in terms of FID) on the benchmarks are all critical aspects to running large scale text-to-image generators on commodity hardware.\n\nOne minor weakness is originality, as the proposed method follows closely in the footsteps of prior work from a methodology perspective, and sees gains in combination with other existing methods (rather than via replacement). This not a critical problem, but on the specific criteria of originality the contribution here feels \"moderate\", compared to other areas discussed in \"Strengths\".\n\nA secondary weakness in the experiments is scale - more experiments on the scale of StyleGAN2 (or similar size models / image domains) would be potentially be beneficial. There are significant performance gains in terms of FID on the smaller scale experiments, but all methods (including this one) result in lower FID performance on the largest task.\n\nWhile the proposed method does lower the gap between compressed and uncompressed (while having a drastic reduction in parameter count) compared to relevant benchmarks, it is still counter to the trend seen in several of the other experiments overall.\n\nHaving experiments on other domains or models (bedrooms as a potential intermediary? Large scale non-face models?) could show whether this effect is particular to StyleGAN2 (being a quite high performance model), or a broader trend on small scale vs large scale image models. Varied effects on small scale (potential performance *improvement* where parameter reduction is unneeded) and large scale (massive parameter reduction while minimizing the performance gap) experiments is a useful contribution, that is currently not spelled out. 
Though asking for more experiments is difficult given the extensive studies the authors have already provided, even one more large-scale experiment could potentially illuminate some properties of the proposed method. This bridges into some of the questions in the following section. Are there specific technical issues preventing the application of this method to more recent (perhaps patch based, latent, or otherwise) models e.g. VQ-GAN, ViT-VQ-GAN, or other large scale image modeling networks used in recent text-to-image generation? If there are limitations in terms of assumptions, this would also be a good thing to state in a limitations-type section.\n\nSimilarly to the previous question, it seems like most (but not all) of the experiments compare with a best method + VID and/or CRD - particularly between Table 1 2 and 3, on the best performing methods sometimes CRD, VID, or both are missing. Is there a direct reason for this, beyond resources/time? \"Filling the grid\" so to speak, or explaining why VID/CRD is not applicable in some cases, would help follow-on researchers who may tackle one of these specific areas in greater detail. The limitations (listed in section A.4) are fairly minimal, and finding other limitations of the method compared to baselines (listing increased hyperparameters / tuning / MCMC related issues for example) would be beneficial. Given application to a face-related model (StyleGAN2) there is potential to analyze or focus on a broader study of where and how the compression modifies generations. There are relevant studies (for example \"Fairness for Image Generation with Uncertain Sensitive Attributes\", Jalal et. al. or \"Can Model Compression Improve NLP Fairness\", Xu and Hu in NLP) which can provide direction for looking at subsets of the overall data, to assess FID or other structural metrics to determine if one subset is disproportionately changed by the compression compared to the baseline. If this study is overly involved (to the level of needing its own study/paper), it is also appropriate to state as a limitation that the compression method has not been extensively tested with respect to increasing, reducing, or matching the bias of the original model, only analyzing characteristics such as parameter count and aggregate performance.",
" \nSampling generative networks on devices with limited computational capabilities imposes limits on complexity of these networks. The concept of distillation provides one approach to obtaining networks whose performance gracefully degrades with network complexity. \n\nPaper proposes a new distillation technique for generative adversarial networks. The method uses a tractable lower bound on mutual information between student and teacher generative networks. \n\nAuthors introduce a family of methods which optimize variational lower bound on the mutual information between teacher and student network. The optimizer alternates between updating variational parameters, student network’s parameters and optionally teacher network’s parameters. This approach combines well with existing network compression algorithms. The proposed method can be combined with existing compression algorithms. It improves performance measured by FID and mIoU for fixed number of parameters in a network compared to compression algorithms it augments. The paper is clear with few small corrections. Code is readily available. \n\n\nClarity: specify that p(t,s) = p(t|x) p(s|x)p(x) since mutual information stems from reuse of code between student and teacher.\nIt might be helpful to provide an example of perfect q(t|s) that keeps the bound tight -- p(x|s)p(t|x) -- and consists of decoder of students output and teacher's encoder.\n\nClarity:``student generator learns to maximize the variational lower bound'' might be more helpful rewritten as ``generator is trained by maximizing …'' The point here is that: Optimization algorithm optimizes, learning algorithm learns (if bug free and fed the right data), while the student generator ... generates samples. Crucially student generator does not learn how to maximize.\n\nVisuals in Figure 1 are a little difficult to understand. Arrows seem to have inconsistent meanings. Some arrows are simply meant as feeding forward outputs from one stage to another. Others are more abstract like the vertical arrows connecting teacher and student networks. Perhaps the intent was to refer to distillation methods embedded in the overall framework. However, the arrows themselves do not have a mathematical or algorithmic analog in the paper. This left me wondering if I missed some piece of the machinery. \n\nFID, MACs, mIoU and any other abbreviations should be defined and referenced.\n What is the interpretation of vertical arrows in Fig1?\n\nBackbone algorithms? Reference?\n\nWhat is the size of parameterization of the q(t|s) network for different tasks (architecture from supp inf Fig.4)? \nWhat is the computational cost of tacking VEM on top of other distillation/compression methods?\n\nIs VEM applicable in absence of a distillation method to piggy back on? Due to presence of MCMC steps and need to track auxiliary Langevin dynamics variables there is an increased space and computational cost. The degree to which this cost can be decreased (or increased) and how that impacts the performance of the method is not quite clear. ",
" This paper proposes a knowledge distillation approach to accelerate GANs via maximizing mutual information between the teacher model and the student model. While mutual information (MI) between two continuous distribution is intractable, the paper alternatively maximize the lower bound of MI. The method is evaluated on both conditional and unconditional GANs with a noticeable numeric improvement. Strengths:\n1. While VID has been proposed for classification model distillation, maximizing mutual information between teacher and student for GANs is more challenging and bears certain novelty.\n2. The motivation is clear and the technical details look correct.\n3. Experiments are carried out on multiple tasks and show good quantitative results.\n\nWeakness:\n1. My primary concern lies in the qualitative results. In fact, there is still not a good metric that really quantifies GAN's performance and thus the qualitative results are crucial to reflect the effectiveness of the approach. Looking at Fig. 2 of the paper, it's hard to tell whether OMGD or OMGD + VEM bears better quality. In addition, it seems that +VEM does not push the outputs from student to look similar to the teacher's. Why is that the case, as the mutual information is maximized during the distillation? It would be good to include figures or quantitative metrics (between teacher and student outputs, not FID) demonstrating the effectiveness of the distillation.\n2. The approach is only evaluated on GANs generating 256px images. However, prior method like CAGC has been applied for 1024px StyleGAN2. How would the method perform on high-resolution synthesis?\n3. The method uses an additional energy-based neural network and requires back-propagation to optimize its parameters, which is non-trival overhead. A computational time profiling on the additional module should be presented. See weakness. Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"nips_2022_sRKNkpUMQNr",
"-7e6HyzqrqB",
"beQPeAoXx4V",
"lQ2MRDmUMj",
"jYbekigQdQ_",
"ao_nSHKTOOp",
"rARv2VY3v4",
"GtrgH7agn_7",
"7Sb95teitJ0",
"J4a6pietHs",
"tZUEx2i0H3B",
"nips_2022_sRKNkpUMQNr",
"nips_2022_sRKNkpUMQNr",
"nips_2022_sRKNkpUMQNr"
] |
nips_2022_7vmyjUHgm9_ | Less-forgetting Multi-lingual Fine-tuning | Multi-lingual fine-tuning (MLF), which fine-tunes a multi-lingual language model (MLLM) with multiple source languages, aims to gain good zero-shot performance on target languages. In MLF, the fine-tuned model tends to fit the source languages while forgetting its cross-lingual knowledge obtained from the pre-training stage. This forgetting phenomenon degenerates the zero-shot performance of MLF, which remains under-explored. To fill this gap, this paper proposes a multi-lingual fine-tuning method, dubbed Less-forgetting Multi-lingual Fine-tuning (LF-MLF). In LF-MLF, we cast multi-lingual fine-tuning as a constrained optimization problem, where the optimization objective is to minimize forgetting, and constraints are reducing the fine-tuning loss. The proposed method has superior zero-shot performance; furthermore, it can achieve the Pareto stationarity. Extensive experiments on Named Entity Recognition, Question Answering and Natural Language Inference back up our theoretical analysis and validate the superiority of our proposals. | Accept | The paper proposes a method for finetuning multi-lingual pre-trained language in multiple languages simultaneously. The task is formalized as a constrained optimization problem and the upper bound of the forgetting is given in theory. A method is developed for multi-lingual fine-tuning to minimize the upper bound. Experiments are conducted in multiple downstream tasks and the model is fine-tuned in a few high-resource languages and the performance is improved in low resource languages as zero-shot settings. The authors responded the reviewers' concerns and the reviewers agree the responses addressed their concerns. The paper is recommended to be accepted, and I ask the authors to carefully prepare the final camera-ready version based on the reviewers' feedback. | val | [
"vP8693mzth",
"tdmq57icrm",
"Kf6cnxj3XIO",
"5ozI0HlUMq",
"MpLL2QKOiCZ",
"41F7TfCRLiE",
"3p2FBlfOyc-q",
"OVkTHmUrfTl",
"2P4UaMjJ7n5",
"vaJc0v6Nga_"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We highly appreciate your time for reading our revised paper. Your constructive and professional comments help a lot for improving this work. As to the limitations, we have discussed from two perspectives: (1) the assumption of being close to the original pretraining benefits the cross lingual generalization; (2) our theoretical analysis on the Pareto stationarity. We have added it in the current revised version (see Section A.1) as follows.\n\n1. Assumption. In Theorem 1, we assume that $L_p(\\theta_f)$ can be approximated by its second order Taylor expansion. This assumption is widely adopted [21] [22] [23] and can ensure proper performance over various deep neural network models and learning scenarios. With this assumption, we obtain an upper bound for the forgetting of multi-lingual fine-tuning, which demonstrates that being close to the original pretraining model can reduce forgetting, in other words, enhance cross-lingual generalization. Thus, if the assumption of second order Taylor approximation failed, being close to the original pretraining model/objective is not sufficient/desirable for downstream cross-lingual generalization. Note that relaxing this assumption remains an open problem, and if this assumption can be relaxed, our proposed methods can be used in more widely models and scenarios. \n\n2. Theory. In our theoretical analysis, we prove that our method can achieve a Pareto stationary point. Pareto stationarity is only a necessary condition for achieving Pareto optimality. Theoretical analysis of the gap between Pareto stationarity and Pareto optimality has not been studied in this work. Furthermore, in the setting of deep learning, the gap between Pareto stationarity and Pareto optimality remains less explored in both the area of multi-objective optimization and machine learning. In the future, we will explore this challenging problem.",
" Thank you for your reply, and we appreciate your time for reading our revised paper. Your comments help us a lot for improving our work. ",
" Thanks for addressing the issue and answering my questions. Now, the paper is clearer than before. And the method is well-motivated, and I can find the significance test in the revised version.",
" Thank you for addressing the listed weaknesses and questions. I read the revised parts of the paper and find them well addressed. A discussion of limitations remains missing nevertheless. ",
" We would like to thank you for your insightful and invaluable comments. Our paper has been carefully revised according to the comments, and the revised version has been submitted to OpenView. The revised parts are highlighted in blue in our manuscript. Below is our point-to-point response.\n\nQ1. It is not clear from the paper how their approach is different from the baseline methods....\n\nA1. Thanks for the comments and we appreciate your carefulness. We have re-written the related work part for Multi-task Learning, where we have discussed the difference between PCGrad [26], Gradient Vaccine [27] and our proposed LF-MLF (see Section 2, line 63-73). The revised related work part for Multi-task Learning is presented as follows. \n\n\"Multi-task Learning (MTL), which simultaneously learns multiple tasks, aims to achieve proper performance on all the including tasks. Recently, various MTL methods [14,15,16,17,18,19,20] have been proposed. Among them, PCGrad [19] and Gradient Vaccine [20] have achieved the state-of-the-art performance. Regarding the source languages of MLF as MTL tasks, MTL methods can be used in MLF. However, MTL methods just focus on the performance of including source languages (tasks), while they do not consider the performance of target languages. It will bring inferior zero-shot performance on the target languages. For example, PCGrad and Gradient Vaccine just focus on finding a proper common descent direction for the source languages, but they do not care about the target languages. Thus, they can not expect to achieve promising zero-shot performance. By contrast, the proposed LF-MLF finds the common descent direction that benefit the zero-shot performance....\"\n\n\nQ2. They have limited the usefulness of the methods to a very narrow...?\n\nA2. Thanks for the insightful comments. This paper focuses on the multi-lingual fine-tuning problem, which is an emerging research topic in the area of Multilingual Language Models. The multi-lingual fine-tuning problem remains less-explored. Especially, the forgetting problem in multi-lingual fine-tuning is still unexplored. Researching on the less-forgetting multi-lingual fine-tuning is meaningful. Therefore, this paper focus on this meaningful topic.\n\nWe highly agree your advice that our method has the potential to be used in a wide range of scenarios, e.g., transfer learning. It is an advantage of our method. In the future, we will adopt LF-MLF in other scenarios.\n\n\nQ3. It is probably incorrect to state line 75: \"MTL methods are not suitable for multi-lingual fine-tuning\" ....\n\nA3. Thanks very much for your advice. MTL methods only focus on the fine-tuning performance on the source languages, while they do not consider the zero-shot performance on the target languages. It makes them to have inferior zero-shot performance on the target languages. In the contrary, zero-shot performance cannot be ignored in multi-lingual fine-tuning (MLF). Therefore, although MTL methods can be used in MLF, they are not the very suitable. By contrast, LF-MLF focus on both the fine-tuning performance and zero-shot performance. Our experimental results verifies that LF-MLF has better zero-shot performance than the baseline MTL methods. \n\nWe appreciate your comments very much. Our statement \"MTL methods are not suitable for multi-lingual fine-tuning\" seems too strong. To present more accurate and clear, we have re-written the related work part for Multi-task Learning, where we have discussed the difference between MTL and LF-MLF. 
\n\nQ4. The authors do not discuss the size of labelled data for each of the multilingual cases or discuss how this affects the results....\n\nA4. Thanks for your insightful comments. In this paper, we adopt the XTREME benchmark and keep its original settings unchanged (includes the size of labelled data) to be consistent with the previous works in the area of Multilingual Language Models. Furthermore, our work focuses on the multi-lingual fine-tuning (MLF) setting, where the compared MLF methods (Uniform-MLF, PCGrad-MLF, GradVac-MLF and LF-MLF) adopt the labelled data from all the source languages. Although the source languages may have different size of labelled data, the MLF methods have identical size of labelled data for they use the labelled data from all the source languages. Thus, our comparisons between the MLF methods are fair.\n\nNevertheless, your advice is very valuable for us. Due to the time limitation of the rebuttal, we cannot conduct enough experiments to discuss the impacts bring by the he size of labelled data. However, in the later version, we will make our best effort to add sophisticated experimental analysis on the impacts bring by the size of labelled data.\n\nQ5. For Figure 3 and 4 please make sure that you specify if the x or y axis is the source or target.\n\nA5. Thanks for your valuable suggestions. In Figure 3 (a) and 4(a), the y axis is the source. We have specified it in our revised paper (please refer to the Section 5.4). ",
" We would like to thank you for your insightful and invaluable comments. Our paper has been carefully revised according to the comments, and the revised version has been submitted to OpenView. The revised parts are highlighted in blue in our manuscript. Below is our point-to-point response.\n\nQ1. The paper misrepresents multilingual fine-tuning.... \n\nA1. Thanks for the comments and we appreciate your carefulness. We have fixed the misrepresentations in our revised paper. Specifically, the misrepresentations are re-written as “Few works [8,9] have investigated the multi-lingual fine-tuning setting, but they do not consider the forgetting of MLLMs' cross-lingual generalization ability. To fill this gap, this paper focuses on the forgetting problem in multi-lingual fine-tuning”. Furthermore, we have re-organized and re-written the abstract and introduction in our revised paper. In the revised version, we have made our best to clearly claim our contributions. We are very grateful for your patience to read our revised paper. \n\nQ2. The abstract and introduction are not well representing...\n\nA2. Thanks for the insightful comments. We have re-organized and re-written the abstract and introduction in our revised paper. According to your advice, in the revised version, we have introduced and motivated the forgetting problem by adding more details. We are very grateful for your patience to read our revised paper.\n\nQ3. A comparison to all-language fine-tuning should be included as an upper bound, since training data is available for at least the NER task.\n\nA3. Thanks very much for your advice. We have added a comparison to the all-language fine-tuning (ALL-MLF). Due to the page limit, we have put the comparison in the Appendix section of our revised paper. The results can be found in the Table 8, 9, 10, 11, 12, 13 of our revised paper. Without any doubt, ALL-MLF has superior zero-shout and fine-tuning performance than LF-MLF and the baseline methods. ALL-MLF is the upper bound for MLF methods. We will make effort to approach the upper bound in the future.\n\nQ4. Significance tests should be performed...\n\nA4. Thanks for your advice. In the revision, we have reported the standard devisations of all results in Table 1, 2, 3, 4, 5, 6. Furthermore, for the zero-shot performance results (Table 1, 2, 3), the pairwise t-tests at 0.05 significance level are conducted. The t-tests demonstrates that LF-MLF is statistically superior to most of the baselines on each task. Besides, to comprehensively evaluate the superiority of LF-MLF, we further utilized Friedman test and Bonferroni-Dunn test to verify the superiority of LF-MLF. We can see that LF-MTF achieves highly superior results to other baseline methods (Please refer to the Section 5.2.1 line 208-222 of our revised paper).\n\nQ5. Please discuss [26] and [27] in Related Work as well...\n\nA5. Thanks very much for your advice. We have re-written the related work part for Multi-task Learning, where we have discussed the difference between PCGrad [26], Gradient Vaccine [27] and our proposed LF-MLF. We are very grateful for your patience to read our revised paper (see Section 2, line 63-73). \n\n\nQ6. How are mini-batches represented in the loss definition? And how are they composed...? \n\nA6. Thanks very much for your comments. In the loss definition, each language has a language-specific mini-batch. We sum losses over these language-specific mini-batches together and then conduct backward propagation. 
In the other word, each mini-batch is composed of only one language.\n\nQ7. How are the \"heads\" defined in Section 3.2? Why can one not use the pretraining head after having fine-tuned the rest of the parameters to get an estimate of L_p?\n\nA7. Thanks very much for your comments. In Section 3.2, the head represents the output layer for specific tasks. For example, a masked language modeling head represents the output layer for predicting the masked words, and a token classification head represents a token-level output layer for classification.\n\nWe do not mean “one can not use the pretraining head after having fine-tuned the rest of the parameters to get an estimate of L_p?” Instead, We mean that “one can not --directly-- use the pretraining head after having fine-tuned the rest of the parameters to get an estimate of L_p?” This is because, in the fine-tuning phase, if we want to estimate the L_p, we have to replace the down-stream task-specific output layer with the masked word prediction layer. Therefore, we argue that “one can not directly use the pretraining head after having fine-tuned the rest of the parameters to get an estimate of L_p”\n\n\nQ8. Figure 2 is not readable in black and white / grayscale.\n\nA8. Thanks very much for your comments. We have changed line style of Uniform-MLF to dash in our revised paper, which enable this figure to be readable in black and white / grayscale. ",
" We would like to thank you for your insightful and invaluable comments. Our paper has been carefully revised according to the comments, and the revised version has been submitted to OpenView. The revised parts are highlighted in blue in our manuscript. Below is our point-to-point response.\n\nQ1. The paper is not easy to follow. The paper method is unclear until I carefully read the paper several times. I would say the paper would be easier to follow if they put more details on the Algorithm and the introduction....\n\nA1. We thank the reviewer for the patience and valuable suggestions. We have carefully revisited the writing issues of our paper and try our best to improve the presentation. To present our proposed method more clear, we have revised our paper from the following two aspects.\n\n(1) To explain the motivation and mechanism of our method, we have re-organized and re-written the abstract and introduction of our paper. For example, in the revised abstract, we have explained our method from a broad perspective: “In LF-MLF, we cast multi-lingual fine-tuning as a constrained optimization problem, where the optimization objective is to minimize forgetting, and constraints are reducing the fine-tuning loss. ”. Correspondingly, in the revised introduction, we have concluded the mechanism of our method as: “This paper proposes to find a less-forgetting descent direction, which prevents a MLLM from forgetting the cross-lingual generalization ability and is a common descent direction for the source languages. To find this less-forgetting descent direction, we cast MLF as a constrained optimization problem. The optimization objective is to minimize the forgetting of a MLLM's cross-lingual generalization ability, and the constraint is that the direction should be a descent direction common to all the source languages.”.\n\n(2) We have added a toy example to illustrate the mechanism of our proposed LF-MLF method in Section 3.3 of our revised paper. In the toy example, we explain the mechanism and advantage of LF-MLF from a geometric view.\n\nQ2. The method is not well-motivated with a clear hypothesis. Therefore, I cannot find the rationale why the LF-MLF is useful in the abstract and introduction.\n\nA2. Thank you for your suggestions to improve the presentation of this paper. In response to advice, to better explain the rationale of LF-MLF’s effectiveness, we have (1) added more background knowledge and insights in the abstract and introduction of our revised paper; (2) we have explain the rationale of LF-MLF from a geometric view by giving a toy example in Section 3.3 of our revised paper.\n\n\nQ3. Did you run any significant tests or compute the std of all results (for all runs)? The scores for LF-MLF are similar to Uniform-MLF for some tasks and settings.\n\nA3. Excellent comments. In the revision, we have reported the standard devisions of all results in Table 1, 2, 3, 4, 5, 6. Furthermore, for the zero-shot performance results (Table 1, 2, 3). The pairwise t-tests at 0.05 significance level are conducted. The t-tests demonstrates that LF-MLF is statistically superior to most of the baselines on each task. \n\nBesides, to comprehensively evaluate the superiority of LF-MLF, we have further utilized Friedman test as the statistical test to analyze the relative performance among the compared methods across the three applications. At 0.05 significance level, the Friedman statistics is 5.5, and critical is 4.76. 
Thus, at 0.05 significance level, the null hypothesis of indistinguishable performance of LF-MTF among all compared methods is clearly rejected. Subsequently, we employ the Bonferroni-Dunn test as the post-hoc test by regarding LF-MLF as the control approach. Figure 2 (please refer to the Figure 2 in the revised version of our paper) reports the CD diagrams at 0.1 significance level, where the average ranks of the compared approaches is marked along the axis. From this figure, we can see that LF-MTF achieves highly superior results to other compared baseline methods.\n\nQ4. I am curious about the running time of the optimization. Does this method efficient in practice?\n\nA4. Thanks for your insightful comments. The running time of optimizing Problem 2 can be concluded as $T(T+1) C_1 + C_2(T)$, where $T$ is the number of the source languages, $C_1$ is the time of computing the inner product between gradients $\\nabla L(\\theta_{k-1})^{\\top} \\nabla L(\\theta_{k-1})$, and $C_2(T)$ is the time of solving a $T$-dimensional quadratic programming problem. The quadratic programming problem is solved by using the solvers.qp() function of cvxopt package. We test the running time on the platform which has a NVIDIA Tesla V100 32GB GPU and a Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz. By repeated testing 100 times, the average $C_1$ is 1.28 ms, and $C_2(T)$ is approximately $50+10*T$ ms. When $T =10$, the running time of optimizing Problem 2 is 378ms. It is efficient and affordable in the training phase.",
" The paper proposes an investigation of how to fine-tune a model with multilingual data effectively with a theoretical and empirical evidence. They proposed a method to avoid catastrophic forgetting by using interior-point solver to optimize the objective function based on the theoretical work. The authors claim that the method is able to reach the Pareto stationary -- no more changes occurred by optimizing multiple objectives by applying optimization. They show the superiority of the approaches in Named Entity Recognition, Question Answering and Natural Language Inference, and several languages.\n Strengths:\n- Interesting work with a good theoretical foundation\n- The task is very useful for learning multilingual models while optimizing the model to reach closer to the upper bound\n\nWeaknesses:\n- The paper is not easy to follow. The paper method is unclear until I carefully read the paper several times. I would say the paper would be easier to follow if they put more details on the Algorithm and the introduction. For example, what method does the paper use to optimize the weights, not just pointing to the equation. And, it would be great to add a short description of the method in the introduction so that the first readers can understand the proposed approach.\n- The method is not well-motivated with a clear hypothesis. Therefore, I cannot find the rationale why the LF-MLF is useful in the abstract and introduction.\n\nTypo:\n- In conclusion, Learning => learning - Did you run any significant tests or compute the std of all results (for all runs)? The scores for LF-MLF are similar to Uniform-MLF for some tasks and settings.\n- I am curious about the running time of the optimization. Does this method efficient in practice? There is no specific limitation section provided in the paper.",
" This paper proposes a method for maintaining multilinguality under fine-tuning when a set of languages is used for fine-tuning. The core idea is to maintain crosslingual generalization by finding updates that are deviating not too much from the original pretrained model and its loss, and those that many languages benefit from. The proposed loss is and its optimization are theoretically motivated and defined, and then evaluated on downstream tasks of the XTREME benchmark with XLM RoBERTa, where a few high-resource languages are chosen for fine-tuning, and the remaining languages are used for zero-shot evaluation. \n\n[NOTE: presentation and overall score updated after author rebuttal] Strengths:\n1. The problem setting of zero-shot generalization *after* fine-tuning is novel and offers impactful applications.\n2. The experimental results are largely convincing.\n3. The detailed impact studies (Section 5.3 and following) are insightful and thorough.\n\nWeaknesses: [NOTE: all well addressed in revised version]\n1. The paper misrepresents multilingual fine-tuning (\"multilingual fine-tuning, which has not been explored before\" l.34; \"the fine-tuning scenario where multiple source languages are involved in fine-tuning, namely multilingual fine-tuning, remains unexplored.\" l.59). It has been explored before, most prominently in the paper for the XTREME benchmark (https://arxiv.org/abs/2003.11080) that this very paper is also evaluating on. \"translate-train\" and \"in-language multi-task\" (from XTREME) yielded gains over monolingual fine-tuning, and should be discussed, in particular in relation to the Uniform MLF baseline presented in this work. Another work that builds on multilingual fine-tuning is for example https://arxiv.org/abs/2205.02022 (NAACL 2022). What hasn't been studied in this setting afaik is the problem of forgetting for some languages, fine-tuning was either done on all languages of interest or only on English.\n2. The abstract and introduction are not well representing the focus of the remaining sections, as it does not introduce and motivate the forgetting problem, that the proposed method is mainly developed for and the title is derived from. \n3. A comparison to all-language fine-tuning should be included as an upper bound, since training data is available for at least the NER task. \n4. Significance tests should be performed, since some differences are sometimes small (and averages across multiple languages) and might not be strong enough to conclude superiority of one model over the other. At least standard deviation could be reported as well (since it's 3 runs). [NOTE: all well addressed in revised version]\n1. Please discuss [26] and [27] in Related Work as well, as baselines are based on them, and the differences should be discussed.\n2. How are mini-batches represented in the loss definition? And how are they composed (mixed across languages or only one language? This might make a different for optimization. \n3. How are the \"heads\" defined in Section 3.2? Why can one not use the pretraining head after having fine-tuned the rest of the parameters to get an estimate of L_p?\n4. Figure 2 is not readable in black and white / grayscale.\n\n Limitations are not addressed, except for that in some scenarios the proposed method does not outperform all other methods. 
Please discuss in which scenarios the method might not work, or where the assumption that being close to the original pretraining model/objective is not sufficient/desirable for downstream crosslingual generalization.",
" This paper presents a way of learning from multilingual annotated data. They focus on the zero-shot transfer to low-resource languages for tasks like NER, QA and NLI. The authors claim that for this setting they need to optimize the model for two new objectives: one that minimizes the forgetting of the low-resource languages from pretraining, and one that ensures that when fine-tuning on the different multi-lingual annotated datasets that the descent direction between them is shared or is in common. They formulate this as a constrained optimization problem and call their method Less-forgetting Multi-lingual Fine-tuning (LF-MLF). They show consistent but not very large improved performance on NER, QA and NLI and look at where labelled data from more languages helps - and unsurprisingly it does. \n Strengths \n\nThey show consistent improvements averaged over up to 45 zero-shot language directions over three different tasks which is quite extensive. \n\nWeaknesses\n\nIt is not clear from the paper how their approach is different from the baseline methods. The authors should explain the most relevant baselines and describe how they are different: Project Conflicting Gradients method and Gradient Vaccine. This is a major flaw in the paper. \n\nThey have limited the usefulness of the methods to a very narrow, specific use-case: multilingual and zero-shot. The less-forgetting direction could in theory be useful for monolingual fine-tuning as well as multilingual fine-tuning. The combined descent could be useful for transfer learning with labelled data too - not just zero-shot. Why did the authors not mention this or do any experiments along these lines? \n\nIt is probably incorrect to state line 75: \"MTL methods are not suitable for multi-lingual fine-tuning\" as the authors use such methods as baselines in the paper, even though they do not describe them and how they are different to the LF-MLF. \n \nThe authors do not discuss the size of labelled data for each of the multilingual cases or discuss how this affects the results. Eg. in Table 2 en, de, fr, ru etc. as performance on fine-tuning just on Russian gives better performance that just training on English which does not make sense. \n\nThe amount of improvement is not great. The simple uniform multilingual fine tuning method is the second best method and LF-MLF is ahead of it by only 0.5-1 point. \n For Figure 3 and 4 please make sure that you specify if the x or y axis is the source or target.\n This work does not discuss its limitations which is a limitation. \n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"5ozI0HlUMq",
"Kf6cnxj3XIO",
"3p2FBlfOyc-q",
"41F7TfCRLiE",
"vaJc0v6Nga_",
"2P4UaMjJ7n5",
"OVkTHmUrfTl",
"nips_2022_7vmyjUHgm9_",
"nips_2022_7vmyjUHgm9_",
"nips_2022_7vmyjUHgm9_"
] |
nips_2022_c3HrNgQE7d | Exploring Figure-Ground Assignment Mechanism in Perceptual Organization | Perceptual organization is a challenging visual task that aims to perceive and group the individual visual element so that it is easy to understand the meaning of the scene as a whole. Most recent methods building upon advanced Convolutional Neural Network (CNN) come from learning discriminative representation and modeling context hierarchically. However, when the visual appearance difference between foreground and background is obscure, the performance of existing methods degrades significantly due to the visual ambiguity in the discrimination process. In this paper, we argue that the figure-ground assignment mechanism, which conforms to human vision cognitive theory, can be explored to empower CNN to achieve a robust perceptual organization despite visual ambiguity. Specifically, we present a novel Figure-Ground-Aided (FGA) module to learn the configural statistics of the visual scene and leverage it for the reduction of visual ambiguity. Particularly, we demonstrate the benefit of using stronger supervisory signals by teaching (FGA) module to perceive configural cues, \ie, convexity and lower region, that human deem important for the perceptual organization. Furthermore, an Interactive Enhancement Module (IEM) is devised to leverage such configural priors to assist representation learning, thereby achieving robust perception organization with complex visual ambiguities. In addition, a well-founded visual segregation test is designed to validate the capability of the proposed FGA mechanism explicitly. Comprehensive evaluation results demonstrate our proposed FGA mechanism can effectively enhance the capability of perception organization on various baseline models. Nevertheless, the model augmented via our proposed FGA mechanism also outperforms state-of-the-art approaches on four challenging real-world applications. | Accept | Overall, the reviewers commend the motivation of the approach, the core ideas presented in the paper, and the extensive experiments conducted for four different applications including camouflaged and salient object detection, infection, and polyp segmentation.
In response to Reviewer fHq8, the authors have mentioned updated results with hyper-parameter tuning, however, they don’t mention which set is used for this purpose. Details on whether the validation set is used or not and how it is chosen are important for the final version.
In response to Reviewer ghhU, authors have reported new experiments and comparisons, alongside clarifications on motivation and justification for the choices made as part of the approach.
It appears that the major concerns from reviewers have been addressed in the response and the paper can be accepted after the rebuttal. Authors are suggested to include all the suggested changes in the final version. | train | [
"5Tpm7HjYrF",
"BhFkAzAi2lh",
"lTvRmJFVJlQ",
"2x2dLIW3aFH",
"LcqzRA4sm6g",
"HnduKggnUOL",
"w3BkHgY400s",
"MC39_CEdxtl",
"V995MCUhRRB",
"kEsKXMXJbS4",
"eH2nPk8WntV",
"shp0GNjPmD",
"hyNGpKKinTr",
"IgdyH_89gVq",
"7hfkIet67f"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your new comments, we clarify your concern below.\n\n\n**Q#:** Regarding the selection of two cues, I understand that the selected cues are of importance. However, how about other cues mentioned in the related work section since they are also \"factors that affect Figure-Ground assignment\".\n\n**A#:** Our work aims to explore the F-G assignment mechanism to empower CNN. A major part of our work is the introduction of FG cues, and we have explained the motivation for choosing two cues in detail in the introduction and in the previous round of responses, based on which we have conducted a follow-up study. We believe that using more valid FG cues can further contribute to developing this direction, which will be an essential part of our future work.\n\n\n**Q#:** For the new results, could you point out which parameters are critical for the performance. Even for these new results, I find that the improvements for some datasets (e.g., CHA, COD) are not significant. While it is not essential to obtain highly superior results, some insights on the results should be provided.\n\n**A#:** In our experiments we found that the learning rate is important for performance. By increasing the learning of learnable parameters in FGA module to 1.1 times of the original (the learning rates of Encoder and Decoder are unchanged), the performance can be further improved. \n\nIn the challenging COD task, each camouflaged image is accompanied by different super-classes (Amphibian, Aquatic, Flying, and Territorial) that reflect common challenges in real-world scenes. These annotations are beneficial for investigating the pros and cons of camouflaged object detection methods. To illustrate the insight of the new results, here we report the results on the Aquatic super-class. This super-class was chosen because most of the images in this superclass contain indefinable boundaries. The result on this superclass set show that the improvement of our method is significant, which further illustrates the effectiveness of FGA.\n\nThe result on Aquatic (474 images) super-class in COD dataset. \n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.811 | 0.883 | 0.696 | 0.051 |\n| [67] | 0.815 | 0.873 | 0.687 | 0.049 |\n| Our | 0.817 | 0.890 | 0.704 | 0.045 |\n| Our* | 0.823 | 0.899 | 0.710 | 0.042 |\n\n[10] D.-P. Fan, G.-P. Ji, M.-M. Cheng, and L. Shao. Concealed object detection. T-PAMI, 2021.\n[67] F. Yang, Q. Zhai, X. Li, R. Huang, A. Luo, H. Cheng, and D.-P. Fan. Uncertainty-guided transformer reasoning for camouflaged object detection. CVPR, 2021.\n\n\n**Q#:** I am curious about the implementation of HED loss. Did you apply deep supervision, which is known important to the performance?\n\n**A#:** We directly replaced L_LR with HED loss in the implementation process and did not use the deep supervision strategy. Note that our L_LR also does not use the deep supervision strategy in the implementation process, so we believe the previous experiment is fair. To alleviate your concern, we have also added the experimental results of adding the deep supervision strategy. We also include the deep supervision strategy in the L_LR part for a fair comparison. As we can see from the experimental results, the performance of the FGA (HED) model with deep supervision is further improved and even slightly surpasses that of the original FGA model. 
However, the performance is still inadequate compared to the FGA model with deep supervision.\n\n| | FGA | FGA +Deep | FGA (HED) | FGA (HED) +Deep |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| S_a | 0.816 | 0.832 | 0.807 | 0.815 |\n| IoU | 0.659 | 0.671 | 0.642 | 0.663 |\n\n\n**Q#:** The motivation to use Lambda instead of self-attention is not discussed.\n\n**A#:** Lambda layers generalize and extend self-attention formulations to capture both content-based and position-based interactions in global or local, which is crucial for modeling highly structured inputs such as images. Moreover, the modules built on the Lambda strategy are computationally efficient, model long-range dependencies at a small memory cost, and can therefore be well suited for dense prediction tasks. We will incorporate this discussion into the next version.",
" Thanks for the detailed response, which has addressed some of my concerns. Now I have some new comments.\n\n1) Regarding the selection of two cues, I understand that the selected cues are of importance. However, how about other cues mentioned in the related work section since they are also \"factors that affect Figure-Ground assignment\". \n2) For the new results, could you point out which parameters are critical for the performance. Even for these new results, I find that the improvements for some datasets (e.g., CHA, COD) are not significant. While it is not essential to obtain highly superior results, some insights on the results should be provided.\n3) I am curious about the implementation of HED loss. Did you apply deep supervision, which is known important to the performance? \n4) The motivation to use Lambda instead of self-attention is not discussed. \n5) Finally, the writing of the methodology part is not good (as I mentioned in the first-round comments). While some revision has been made, I did not check very carefully (which will take some time) to confirm its correctness. ",
" Dear Reviewer ghhU,\n\nWe have addressed your major concerns regarding our paper with additional experimental results. We are happy to discuss them with you in the openreview system if you feel that there still are some concerns/questions. We also welcome new suggestions/comments from you!\n\nBest regards,",
" **Q#:** The statement \"figure-ground assignment ... contributes almost all perception-based tasks\" seems an overclaim, or more discussions should be provided to support the argument.\n\n**A#:** Thanks to your suggestion, we have revised this sentence. Please refer to our rebuttal revision. \n\nThe critical process of the perceptual organization known as figure-ground assignment, first proposed by Edgar Rubin [ref1], involves giving one of the two adjacent regions a boundary. Over the years, many scholars have confirmed its existence and discovered its mechanism. The figure-Ground assignment is commonly thought to follow region segmentation, and it is an essential step in forming a perception of surfaces, shapes, and objects [ref2][ref3]. \n\n[ref1] Rubin, E.: Visuell wahrgenommene figuren. In: Kobenhaven: Glydenalske boghandel. (1921)\n\n[ref2] J. Wagemans, J. H. Elder, M. Kubovy, S. E. Palmer, M. A. Peterson, M. Singh, and R. von der Heydt. A century of gestalt psychology in visual perception: I. perceptual grouping and figure–ground organization. Psychological bulletin, 2012.\n\n[ref3] J. Wagemans, J. Feldman, S. Gepshtein, R. Kimchi, J. R. Pomerantz, P. A. Van der Helm, and C. Van Leeuwen. A century of gestalt psychology in visual perception: Ii. conceptual and theoretical foundations. Psychological bulletin, 138(6):1218, 2012.\n\n\n**Q#:** How are K_C and K_L determined? Do they set to the same values across the datasets? How will the different values affect the performance?\n\n**A#:** The structure element size (K_C and K_L) mentioned in the paper is an intuitive choice. They are set to the same values across the datasets. Table 7-10 show that our proposed method achieves competitive results without hyper-parameter tuning. Furthermore, the physical meaning of K_C and K_L, which control the granularity of the Figure-Ground cues and their coverage areas, is easy to understand. Therefore, we believe carefully selecting this hyperparameter will improve performance.\n\n\n**Q#:** Why is Labmbda used here?\n\n**A#:** As stated in the caption of Figure 3, we use Lambda strategy, which has been proven to be an effective method, to improve the computation efficiency of the CLI and GI interaction.\n\n\n**Q#:** Typos and other errors.\n\n**A#:** Thanks for your detailed comments. We have addressed each of these issues. Please refer to our rebuttal revision.",
" **Q#:** The performance on application tasks.\n\n**A#:** Our work aims to explore the F-G assignment mechanism to empower CNN to achieve a robust boundary assignment despite visual ambiguity. And we present a novel Figure-Ground-Aided (FGA) module to learn the configural statistics of the visual scene and leverage it for the reduction of visual ambiguity. To clearly discuss its effectiveness, a cognitive visual testing--Figure-Ground Segregation is presented in Section 3. Similar to the control variable method in physical experiments, Figure-Ground Segregation can reliably evaluate the capability of object segmentation models in boundary assignment, excluding irrelevant factors and some short-cut cues (e.g., context information). The comprehensive experiments adequately demonstrate that our proposed FGA module can facilitate the CNN to learn more efficiently in the reduction of visual ambiguities with low data requirements.\n\nFurthermore, the application tasks serve as validation of the potential of our approach. Experimental results show that we achieve superior results in most metrics for all four applications (COD, PS, LIS, and SOD), and we believe that careful tuning of the parameters will result in better performance. To demonstrate the potential of our method, we have optimized some hyperparameters (learning rate and decay strategy) based on the original model, and the experiments show that the performance of our method can be further improved. Where \"Our*\" represents the performance after optimizing the hyperparameters.\n\nThe results on CHA Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.888 | 0.942 | 0.816 | 0.030 |\n| [67] | 0.888 | 0.945 | 0.796 | 0.031 |\n| Our | 0.898 | 0.945 | 0.839 | 0.032 |\n| Our* | 0.902 | 0.949 | 0.840 | 0.031 |\n\nThe results on CAM Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.820 | 0.882 | 0.743 | 0.070 |\n| [67] | 0.785 | 0.859 | 0.686 | 0.086 |\n| Our | 0.793 | 0.858 | 0.745 | 0.078 |\n| Our* | 0.803 | 0.871 | 0.748 | 0.068 |\n\nThe results on COD Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.815 | 0.887 | 0.680 | 0.037 |\n| [67] | 0.818 | 0.850 | 0.667 | 0.035 |\n| Our | 0.818 | 0.888 | 0.683 | 0.034 |\n| Our* | 0.821 | 0.895 | 0.687 | 0.031 |\n\nThe results on COVID-19 Dataset.\n| | Dice | Sen | Spec | S | E | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [13] | 0.682 | 0.692 | 0.943 | 0.781 | 0.720 | 0.082 |\n| [20] | 0.700 | 0.751 | - | - | 0.860 | 0.084 |\n| Our | 0.735 | 0.720 | 0.965 | 0.792 | 0.900 | 0.062 |\n| Our* | 0.754 | 0.748 | 0.973 | 0.799 | 0.911 | 0.056 |\n\n[10] D.-P. Fan, G.-P. Ji, M.-M. Cheng, and L. Shao. Concealed object detection. T-PAMI, 2021.\n\n[13] D.-P. Fan, T. Zhou, G.-P. Ji, Y. Zhou, G. Chen, H. Fu, J. Shen, and L. Shao. Inf-net: Automatic covid-19 lung infection segmentation from ct images. T-MI, 2020.\n\n[20] G.-P. Ji, L. Zhu, M. Zhuge, and K. Fu. Fast camouflaged object detection via edge-based reversible re-calibration network. PR, 2021.\n\n[67] F. Yang, Q. Zhai, X. Li, R. Huang, A. Luo, H. Cheng, and D.-P. Fan. Uncertainty-guided transformer reasoning for camouflaged object detection. CVPR, 2021.\n\n\n**Q#:** I am particularly concerning whether the lower-region cue is well connected to the challenge studied in the work. More discussions are demanded. 
In addition, I am wondering how will the model perform by directly using a edge-aware loss like HED instead of L_LR.\n\n**A#:** Following your comments, we performed an experiment on the normal-level dataset of the Figure-Ground Segregation Test, and the experimental result shows that directly using an edge-aware loss like HED instead of L_LR does not achieve better results than the FGA model itself. The reason is that the supervision using edge-aware only provides the perception of the foreground boundary, while the Lower region cue provides a more contextual perception of the boundary region beyond the foreground boundary. In brief, it allows the model to focus on the associations within the inner and outer regions of the boundary and the differences between the inner and outer regions.\n\n| | FGA | FGA (HED) |\n|:--------:|:--------:|:--------:|\n| S_a | 0.816 | 0.807 |\n| IoU | 0.659 | 0.642 |",
" **Q#:** The core idea and motivation of our paper.\n\n**A#:** Before answering other questions, we would like to clarify this paper's core idea and motivation. Our work aims to explore the F-G assignment mechanism, which conforms to human vision cognitive theory, to empower CNN to achieve a robust perceptual organization despite visual ambiguity. Existing approaches built on CNNs contain two main components: encoders and contextual modeling. The encoding part typically utilizes a feature extractor (e.g., Resnet) pre-trained on a large-scale classification task (ImageNet). The advantage of doing so is using a large-scale classification task to drive the model to learn associations between pixels and category labels, which can extract invariant representations beyond the pixel level. Nevertheless, such invariant representations lack spatial contextual awareness, which is not friendly for pixel-level prediction tasks like image segmentation. To address this problem, a series of works have tried to complement it by adding or enhancing the capabilities of contextual modeling (multi-scale, multi-receptive-field, long-range modeling, Etc.). However, the visual ambiguities impede CNN's represent learning and contextual modeling, leading to inaccurate and incomplete perceptual organization. Since the visual appearance differences between the foreground and background are obscure, it is difficult to perceive the correlation between individual visual elements and determine the boundaries.\n\nThe Figure-Ground assignment principle in human cognition is thought to be important in reducing the visual ambiguousness of a scene from two aspects: 1) as stated in Line 68-71, \"neural activity associated with the Figure-Ground assignment mechanism in the V2 cortex of human vision occurs as early as 10-25 ms after the generation of visual stimuli, providing strong support for the role of local bottom-up processing\". Equally important, 2) as stated in Line 71-73, \"the 'meaningfulness principle' also showed that assigned figures tend to be associated with neighborhoods with familiar shapes, pointing to the integration of knowledge from the top-down\".\n\nInspired by these studies, we correspondingly propose our insights: 1) we perform a pre-attentive selection of supervisory signals (figure-ground cues) and filter out the knowledge that is valuable for context modeling as support (stronger supervisory signals) for the following perception. 2) we introduce a progressive interaction enhancement mechanism (implemented by several IEMs) to support the boundary assignment process. \n\n**Q#:** The figure-ground cues selection.\n\n**A#:** We carefully investigate the configural cues related to the Figure-Ground assignment mechanism in human psychophysics and find that the figural region usually takes on the shape instructed by the separating boundary and appears closer to the viewer. In contrast, the ground region is seen as extending behind the figure. Therefore, we selected cues (Convexity and Lower Region) directly related to the implementation of the figure-ground assignment mechanism in our work. 
As shown in Line 81, \"Convexity cue corresponds to the regions on either side of the boundary where the scene depth changes abruptly and is beneficial for analyzing the hierarchical relationship between neighboring regions in the image, facilitating hierarchical contextual modeling.\" As shown in Line 84, \"Lower region cue usually corresponds to the region of the scene where occlusion has occurred and is beneficial for analyzing the occlusion relationship between various neighborhoods in an image and determining the shape attribution of foreground objects and background regions.\"\n",
" **Q#:** The superiority of the proposed model is not very significant\n\n**A#:** Our work aims to explore the F-G assignment mechanism to empower CNN to achieve a robust boundary assignment despite visual ambiguity. And we present a novel Figure-Ground-Aided (FGA) module to learn the configural statistics of the visual scene and leverage it for the reduction of visual ambiguity. To clearly discuss its effectiveness, a cognitive visual testing--Figure-Ground Segregation is presented in Section 3. Similar to the control variable method in physical experiments, Figure-Ground Segregation can reliably evaluate the capability of object segmentation models in boundary assignment, excluding irrelevant factors and some short-cut cues (e.g., context information). The comprehensive experiments adequately demonstrate that our proposed FGA module can facilitate the CNN to learn more efficiently in the reduction of visual ambiguities with low data requirements.\n\nFurthermore, the application tasks serve as validation of the potential of our approach. Experimental results show that we achieve superior results in most metrics for all four applications (COD, PS, LIS, and SOD), and we believe that careful tuning of the parameters will result in better performance. To demonstrate the potential of our method, we have optimized some hyperparameters (learning rate and decay strategy) based on the original model, and the experiments show that the performance of our method can be further improved. Where \"Our*\" represents the performance after optimizing the hyperparameters.\n\nThe results on CHA Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.888 | 0.942 | 0.816 | 0.030 |\n| [67] | 0.888 | 0.945 | 0.796 | 0.031 |\n| Our | 0.898 | 0.945 | 0.839 | 0.032 |\n| Our* | 0.902 | 0.949 | 0.840 | 0.031 |\n\nThe results on CAM Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.820 | 0.882 | 0.743 | 0.070 |\n| [67] | 0.785 | 0.859 | 0.686 | 0.086 |\n| Our | 0.793 | 0.858 | 0.745 | 0.078 |\n| Our* | 0.803 | 0.871 | 0.748 | 0.068 |\n\nThe results on COD Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.815 | 0.887 | 0.680 | 0.037 |\n| [67] | 0.818 | 0.850 | 0.667 | 0.035 |\n| Our | 0.818 | 0.888 | 0.683 | 0.034 |\n| Our* | 0.821 | 0.895 | 0.687 | 0.031 |\n\nThe results on COVID-19 Dataset.\n| | Dice | Sen | Spec | S | E | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [13] | 0.682 | 0.692 | 0.943 | 0.781 | 0.720 | 0.082 |\n| [20] | 0.700 | 0.751 | - | - | 0.860 | 0.084 |\n| Our | 0.735 | 0.720 | 0.965 | 0.792 | 0.900 | 0.062 |\n| Our* | 0.754 | 0.748 | 0.973 | 0.799 | 0.911 | 0.056 |\n\n[10] D.-P. Fan, G.-P. Ji, M.-M. Cheng, and L. Shao. Concealed object detection. T-PAMI, 2021.\n\n[13] D.-P. Fan, T. Zhou, G.-P. Ji, Y. Zhou, G. Chen, H. Fu, J. Shen, and L. Shao. Inf-net: Automatic covid-19 lung infection segmentation from ct images. T-MI, 2020.\n\n[20] G.-P. Ji, L. Zhu, M. Zhuge, and K. Fu. Fast camouflaged object detection via edge-based reversible re-calibration network. PR, 2021.\n\n[67] F. Yang, Q. Zhai, X. Li, R. Huang, A. Luo, H. Cheng, and D.-P. Fan. Uncertainty-guided transformer reasoning for camouflaged object detection. CVPR, 2021.\n\n\n**Q#:** Some necessary implementation details are missing.\n\n**A#:** (1) The intermediate features E^Con_4, ..., E^Con_1 and E^LR_4, ..., E^LR_1 are derived from the feature E_5. 
The calculation process here is the same as in the Decoder part. (2) The structure element size mentioned in the paper is an intuitive choice. As shown in Section 4, our proposed method achieves competitive results without hyper-parameter tuning. We believe that a careful selection of this hyperparameter will improve performance. (3) We have presented the relevant details in supplementary materials to facilitate a better understanding of the reader (Figure S6 in the supplementary material).\n\n\n**Q#:** There are some mistakes in the notations and expressions.\n\n**A#:** (1): We are sorry to miswrite \"L_D\" as \"L_M\". (2): The four tasks here include, in addition to the previously mentioned COD, PS, and LIS, the SOD task mentioned at the end of the experimental section. (5): We are sorry to miswrite. The results of [66] on PASCAL-S dataset is 0.062, 0.825, 0.862, and 0.901. (3)(4)(6)(7)(8): Thank you for the reminder. We have checked and fixed these issues as suggested. Please refer to our rebuttal revision.",
" We thank the reviewer for taking the time to read our responses and provide positive assessments and additional concerns. We wish to have the opportunity to address the concerns regarding empirical evaluations.\n\n**Q#:** Why authors design the experiments on the synthetic dataset? If the realistic datasets may include some other irrelevant factors for Figure-Ground Segregation, authors should explain this problem explicitly.\n\n**A#:** As a cognitive visual testing, Figure-Ground Segregation is presented in Section 3. Similar to the control variable method in physical experiments, Figure-Ground Segregation can reliably evaluate the capability of object segmentation models in boundary assignment, excluding irrelevant factors and some short-cut cues (e.g., context information). Specifically, referring to the cognitive science experiments which exclude irrelevant factors from the test conditions, such as excluding other irrelevant factors for Figure-Ground Segregation, we also need to exclude the influence of several factors, i.e. 1) excluding the influence of contextual information brought by semantic labels since utilizing semantic labels of different objects helps to learn discriminative feature representation; 2) excluding the influence of spatial contexts provided by multiple instances belonging to the same class since they contribute to learning robust (e.g., scale-invariant or occlusion-aware) feature representation; and 3) excluding segmentation cues due to significant appearance differences between foreground and background since their distinct textures, colors, and illumination help to learn cheap features to distinguish them.\n\nFrom the perspective of computer vision, the competition of this test is some kind of weak. However, this is precisely the characteristic of cognitive testing. Standard computer vision datasets make it difficult to pinpoint the relative contributions of different visual strategies since the performance of architecture may be affected by several factors, including dataset biases, model hyper-parameters, and the number of samples [ref1][ref2][ref3][ref4]. \n\n[ref1] D. Linsley, J. Kim, V. Veerabadran, and T. Serre. Learning long-range spatial dependencies with horizontal gated-recurrent units. NeurIPS, 2018.\n\n[ref2] J. Kim, D. Linsley, K. Thakkar, and T. Serre. Disentangling neural mechanisms for perceptual grouping. ICLR, 2020.\n\n[ref3] Linsley, D., Malik, G., Kim, J., Govindarajan, L.N., Mingolla, E. and Serre, T.. Tracking Without Re-recognition in Humans and Machines. NeurIPS, 2021.\n\n[ref4] Vaishnav, M., Cadene, R., Alamia, A., Linsley, D., VanRullen, R. and Serre, T., 2022. Understanding the computational demands underlying visual reasoning. Neural Computation.\n\n\n**Q#:** The ablation experiments on the realistic datasets are also necessary.\n\n**A#:** Following the reviewer’s advice, we added the relevant ablation experiments on the realistic dataset, as shown in the following table, and the performance is generally consistent with that in the Figure-Ground Segregation Test. 
Thanks for your suggestion, and due to space limitations in the main paper, we have included these ablation results in the supplementary material (Table S6 and Table S7).\n\nAblation study of the proposed FGA-Net on COD dataset.\n| | LR | Covx | IEM | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\n| Baseline | | | | 0.795 | 0.878 | 0.649 | 0.063 |\n| (a) | $\\surd$ | | | 0.808 | 0.871 | 0.669 | 0.040 |\n| (b) | | $\\surd$ | | 0.803 | 0.866 | 0.654 | 0.039 |\n| (c) | $\\surd$ | $\\surd$ | | 0.813 | 0.880 | 0.683 | 0.035 |\n| FGA | $\\surd$ | $\\surd$ | $\\surd$ | 0.821 | 0.895 | 0.687 | 0.031 |\n\nAblation study of IEM on COD dataset.\n| Local | Collaborative | Global | Lambda | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\n| $\\surd$ | | | | 0.815 | 0.884 | 0.680 | 0.037 |\n| $\\surd$ | $\\surd$ | | | 0.816 | 0.889 | 0.690 | 0.036 |\n| | | $\\surd$ | | 0.813 | 0.880 | 0.681 | 0.037 |\n| $\\surd$ | $\\surd$ | $\\surd$ | | 0.818 | 0.890 | 0.686 | 0.033 |\n| $\\surd$ | $\\surd$ | $\\surd$ | $\\surd$ | 0.821 | 0.895 | 0.687 | 0.031 |\n",
" Thank you for your feedback. We hope you will agree that we have addressed your main concerns, and that the paper is far more readable because of it.\n\n**Q#:** I suggest improving the structure (introduction->method->experiments), and put more focus on the IEM in Fig 3, which is in my view the main figure in this paper. Also, to improve the visualization of the Fig 7, and Fig. 10.\n\n**A#:** Thanks to your suggestion, we have modified the structure of our paper. Please refer to our rebuttal revision. \n\nWe need to clarify that our work aims to explore the F-G assignment mechanism, which conforms to human vision cognitive theory, to empower CNN to achieve a robust perceptual organization despite visual ambiguity. Specifically, we present a novel Figure-Ground-Aided (FGA) framework to learn the configural statistics of the visual scene and leverage it for the reduction of visual ambiguity. Two main aspects are involved in our method: 1) We perform a pre-attentive selection of supervisory signals (figure-ground cues), and filter out the knowledge that is valuable for context modeling as support (stronger supervisory signals) for the following perception; 2) We introduce a progressive interaction enhancement mechanism (implemented by several IEMs) to provide support for the boundary assignment process. Both of these contributions are equally important for our work.\n\nIn addition, due to the page limit of the paper, we provide the visualization of Table 7-10 in the Supplementary Material, as detailed in Figure S8, Figure S10, and Figure S11.\n\n\n**Q#:** The IEM is built upon adding more learning units through the spatial self-attention mechanism. I wonder if adding parameters, and/or depth can improve the IoU and S-measure on the FG test set. Can you comment on the number of parameters added by the IEM units to the ResNet50, and speculate about performance of deeper segmentation models?\n\n**A#:** Following your comments, we explored the relevant hyperparameters of IEM on the normal-level dataset of the Figure-Ground Segregation Test. We explored the relative positional embedding range of the CLI part of the IEM, a parameter that controls the scale of the IEM for modeling local contexts. As shown in the table below, the performance improves further as the relative position embedding range increases, but the magnitude of the improvement becomes diminished. Moreover, to further reduce the number of parameters and computation in IEM, we incorporate a bottleneck strategy similar to that in ResBlock, where we explore the impact of the scaling ratio of bottlenecks in IEM on performance. Experimental results show that the smaller the scaling ratio (indicating a larger number of parameters), the performance can be improved, but the improvement is limited. Details of the python implementation of IEM are provided in the supplementary material (Figure S2).\n\nThe relative positional embedding range (r) w.r.t performance.\n| r | 3 | 5 | 7 | 9 |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| S_a | 0.813 | 0.816 | 0.818 | 0.818 |\n| IoU | 0.690 | 0.695 | 0.696 | 0.698 |\n\nThe scaling ratio (rate) of bottlenecks in IEM w.r.t performance.\n| rate | 32 | 16 | 8 |\n|:--------:|:--------:|:--------:|:--------:|\n| S_a | 0.816 | 0.819 | 0.819 |\n| IoU | 0.695 | 0.700 | 0.703 |\n\nThe number of parameters added by the IEM units to the ResNet50 is 7.97M, when the relative position embedding range in the CLI is 5, and the compression ratio of the bottleneck is 32. 
Moreover, a deeper segmentation model means it has a better representation capability, and building upon a better representation, we believe our approach can achieve better performance.\n\n\n**Q#:** Failure case of our method.\n\n**A#:** Thanks for your suggestion. We have presented the failure case in the supplementary material. As shown in Figure S12, our method produces an inaccurate result in the case of excessively complex foreground content. This is probably due to the fact that our FGA model learns configural statistics that do not contain these local trivial details. To cope with this situation, we can add specific network [ref1] or post-processing measures [ref2] used to refine the details behind the existing model.\n\n[ref1] Qin, X., Zhang, Z., Huang, C., Gao, C., Dehghan, M. and Jagersand, M., 2019. Basnet: Boundary-aware salient object detection. CVPR.\n\n[ref2] Krähenbühl, P. and Koltun, V., 2011. Efficient inference in fully connected crfs with gaussian edge potentials. Advances in neural information processing systems.",
" Thank you for the detailed comments. These comments would help us greatly improve the quality of the paper.\n\n**Q#:** The performance gaps between the proposed method and existing methods are marginal in some metrics. \n\n**A#:** Our work aims to explore the F-G assignment mechanism to empower CNN to achieve a robust boundary assignment despite visual ambiguity. And we present a novel Figure-Ground-Aided (FGA) module to learn the configural statistics of the visual scene and leverage it for the reduction of visual ambiguity. To clearly discuss its effectiveness, a cognitive visual testing--Figure-Ground Segregation is presented in Section 3. Similar to the control variable method in physical experiments, Figure-Ground Segregation can reliably evaluate the capability of object segmentation models in boundary assignment, excluding irrelevant factors and some short-cut cues (e.g., context information). The comprehensive experiments adequately demonstrate that our proposed FGA module can facilitate the CNN to learn more efficiently in the reduction of visual ambiguities with low data requirements.\n\nFurthermore, the application tasks serve as validation of the potential of our approach. Experimental results show that we achieve superior results in most metrics for all four applications (COD, PS, LIS, and SOD), and we believe that careful tuning of the parameters will result in better performance. To demonstrate the potential of our method, we have optimized some hyperparameters (learning rate and decay strategy) based on the original model, and the experiments show that the performance of our method can be further improved. Where \"Our*\" represents the performance after optimizing the hyperparameters.\n\nThe results on CHA Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.888 | 0.942 | 0.816 | 0.030 |\n| [67] | 0.888 | 0.945 | 0.796 | 0.031 |\n| Our | 0.898 | 0.945 | 0.839 | 0.032 |\n| Our* | 0.902 | 0.949 | 0.840 | 0.031 |\n\nThe results on CAM Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.820 | 0.882 | 0.743 | 0.070 |\n| [67] | 0.785 | 0.859 | 0.686 | 0.086 |\n| Our | 0.793 | 0.858 | 0.745 | 0.078 |\n| Our* | 0.803 | 0.871 | 0.748 | 0.068 |\n\nThe results on COD Dataset.\n| | S | E | F | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [10] | 0.815 | 0.887 | 0.680 | 0.037 |\n| [67] | 0.818 | 0.850 | 0.667 | 0.035 |\n| Our | 0.818 | 0.888 | 0.683 | 0.034 |\n| Our* | 0.821 | 0.895 | 0.687 | 0.031 |\n\nThe results on COVID-19 Dataset.\n| | Dice | Sen | Spec | S | E | M |\n|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\n| [13] | 0.682 | 0.692 | 0.943 | 0.781 | 0.720 | 0.082 |\n| [20] | 0.700 | 0.751 | - | - | 0.860 | 0.084 |\n| Our | 0.735 | 0.720 | 0.965 | 0.792 | 0.900 | 0.062 |\n| Our* | 0.754 | 0.748 | 0.973 | 0.799 | 0.911 | 0.056 |\n\n[10] D.-P. Fan, G.-P. Ji, M.-M. Cheng, and L. Shao. Concealed object detection. IEEE transactions on pattern analysis and machine intelligence, 2021.\n\n[13] D.-P. Fan, T. Zhou, G.-P. Ji, Y. Zhou, G. Chen, H. Fu, J. Shen, and L. Shao. Inf-net: Automatic covid-19 lung infection segmentation from ct images. IEEE transactions on medical imaging, 2020.\n\n[20] G.-P. Ji, L. Zhu, M. Zhuge, and K. Fu. Fast camouflaged object detection via edge-based reversible re-calibration network. Pattern Recognition, page 108414, 2021.\n\n[67] F. Yang, Q. Zhai, X. Li, R. Huang, A. Luo, H. Cheng, and D.-P. Fan. 
Uncertainty-guided transformer reasoning for camouflaged object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4146–4155, 2021.\n\n\n**Q#:** The best numbers are not always marked in bold, which is misleading, e.g. second row of \"Figure 9\", second row of \"Figure 10\". The main experimental result tables are titled as Figures, such as Figures 7, 8, 9, 10. To me, they are tables rather than figures.\n\n**A#:** As the reviewer pointed out, there are some annotation errors in \"Figure 9\" and \"Figure 10\". We have checked and corrected them in the rebuttal version. Furthermore, we have modified the title of the experimental results section from \"Figure\" to \"Table\". Please refer to our rebuttal revision. ",
" We sincerely thank the reviewers for their thoughtful reviews. We have provided detailed responses in individual responses to each reviewer and incorporated feedback in the revised version. We believe that these revisions will greatly improve the manuscript. You can find the version of the manuscript by clicking on the \"Show revisions\" link. The link contains a revised manuscript and supplementary material.",
" This paper focuses on perceptual organizatio where the goal is to perceive and group the individual visual element. To address the visual ambiguity in the discrimination process, authors propose to use e figure-ground assignment mechanism to improve deep neural networks to achieve a robust perceptual organization. Specifically, a Figure-Ground-Aided (FGA) module is presented to learn the configural statistics of the visual scene and leverage it for the reduction of visual ambiguity. An Interactive Enhancement Module (IEM) is designed to leverage configural cues from FGA to enhance the representation learning. Experiments are conducted on four applications. Strength:\n\n+ The idea of taking the inspiration from the perceptual organization of human vision and studying the figure-Ground cues is interesting. The motivation is also well explained. \n\n+ Experiments are conducted on different robust perception organization tasks, such as camouflaged object detection, polyp segmentation, and lung infection segmentationm. The results validate that the proposed method can help improve the performance of robust perception organization tasks. \n\n+ The proposed Figure-Ground Segregation test can help to investigate the performance of models in Figure-Ground assignment.\n\n\nWeakness:\n\n- The performance gaps between the proposed method and existing methods are marginal in some metrics. The best numbers are not always marked in bold which is missleading, e.g. second row of \"Figure 9\", second row of \"Figure 10\".\n\n- The main experimental result tables are titled as Figures, such as Figure7, 8, 9, 10. To me, they are tables rather than fingures. Please refer to the weakness part in theStrengths And Weaknesses section. Authors discussed the limitations of the method in the main paper and the appendix. ",
" The paper proposes a module for enhancing figure-ground assignment, and which can be used to extend existing encoder-decoder based segmentation models. More specifically, the authors implement mechanisms for measuring two figure-ground cues, namely ‘convexity’ and ‘lower region’. These two cues come from the cognitive vision literature, implemented via morphological operators, and inserted into the encode-decoder CNN architecture. Experimental evaluation of known CNNs augmented with this novel module are then performed on challenging segmentation tasks, e.g., those of medical imaging, indicating an improvement of accuracy in most tasks. Strengths: \n•\tFigure-ground assignment is indeed a fundamental process for computer vision algorithms, which is yet not fully explored. I like the idea of adding FG mechanisms to the current FCN segmentation models.\n•\tThe FG segregation test is an important test, which is indeed popular on the human vision literature. I agree that segmentation models such as DeeplabV3 should be able to replicate human behavior in this test set, with and without additional training from a similar set. \n\nWeaknesses:\n•\tThe paper is a bit hard to follow, and several sections were needed more than one reading pass. I suggest improving the structure (introduction->method->experiments), and put more focus on the IEM in Fig 3, which is in my view the main figure in this paper. Also, to improve the visualization of the Fig 7, and Fig. 10. \n The IEM is built upon adding more learning units through the spatial self-attention mechanism. I wonder if adding parameters, and/or depth can improve the IoU and S-measure on the FG test set. Can you comment on the number of parameters added by the IEM units to the ResNet50, and speculate about performance of deeper segmentation models? It will be good to exemplify few failure cases of your model (e.g., on the FG or medical datasets). Perhaps other FG factors are needed? (e.g., good continuation?). ",
" The paper proposes a Figure-Ground-Aided (FGA) module to improve the performance of existing CNN models in figure-ground segregation. In particular, the method considers two figure-ground assignment cues, i.e., convexity and lower-region, and derives two supervisory signals automatically from ground-truths for network supervision. An interactive enhancement module is proposed to enhance feature representations. The results look good. Strengths\n\nThis work is on addressing figure-ground segregation of situations with high visual ambiguity. The paper overall is well organized and easy to read in most sections (not in the technical part). Experiments on a synthetic benchmark and four practical applications are surely extensive and the results look good to me.\n\nWeaknesses\n\nWhile it makes sense to me to formulate perceptual theory into neural networks, the motivation to use the specific two cues (convexity and lower-region) instead of many other cues (size, surroundedness, orientation and contrast, symmetry, parallelism, meaningfulness) as mentioned in Related Work (Line 343) is not quite clear to me. The two cues explored here have also been explored previously, e.g., in [ref1,ref2], thus the novelty seems to be limited. Regarding the performance, even though that the authors argue that \"existing methods degrades significantly when deployed in cases with complex visual ambiguity\", I found from Tables 7-10 that the improvements are very minor in most datasets. Thus, I am concerning the core motivation of the method and the effectiveness of the technical implementations. Last, the formulations in Section 2 are not well presented. Though I can get the main point, the poor formulation makes it hard for me to understand all the details. My detailed questions are given in the next section.\n\n[ref1] Sundberg, Patrik, et al. \"Occlusion boundary detection and figure/ground assignment from optical flow.\" CVPR 2011\n\n[ref2] Lu, Yao, et al. \"Salient object detection using concavity context.\" ICCV 2011.\n - I am particularly concerning whether the lower-region cue is well connected to the challenge studied in the work. More discussions are demanded. In addition, I am wondering how will the model perform by directly using a edge-aware loss like HED [ref3] instead of $\\mathcal{L}_{LR}$.\n\n- I am unsure about the meaning of \"stronger supervisory signals\" in Line 89. Why is it a weak form by \"directly using mask labels as supervision\"? This is not convinced to me since many competitors in Tables 7-10 only use ground-truths as supervision, and still obtains very promising performance. \n\n- The statement \"figure-ground assignment ... contributes almost all perception-based tasks\" seems an overclaim, or more discussions should be provided to support the argument. \n\n- So many errors in only seven formulas:\n - Eq.1 and Eq.2 are not well written. What does the symbol $\\Leftarrow$ refer to? \n - It is unclear the meaning fo $GT_{L_2}$ in Eq. 2.\n - How are $K_c$ and $K_L$ determined? Do they set to the same values across the datasets? How will the different values affect the performance?\n - L184 only gives the definition of $Q^{LR}$, however, strictly speaking, $Q^{Con}$ is not defined.\n - It is not clear what the $E$ in Line 187 refers to. \n - $\\mathcal{L}_M$ is not defined. Should it be $\\mathcal{L}_D$ in Line 193? \n - It is also not clear to me how $M$ in Eq. 5 will be used in the network. \n - The $\\texttt{Lambda}$ in Eq. 3 is very similar to a self-attention operator. 
Why is $\\texttt{Lambda}$ used here? \n - The symbols $Cue_{LR}$ and $Cue_{Con}$ shown in Fig. 3 are not consistent with symbols in Section 2.\n - $\\{D_i\\}_{i=1}^5$ in Fig.3 are also not defined.\n - What does the $P_d$ refer to? The symbols is potentially misused since it is defined to denote position embedding in Line 183.\n\n- Typos or grammar errors:\n - \"fully connection layer\" should be \"fully connected layer\"\n - \"k-th feature\" should be \"the k-th feature\"\n\n[ref3] Xie, Saining, and Zhuowen Tu. \"Holistically-nested edge detection.\" ICCV 2015.\n\n No limitation is discussed. ",
" This work proposes a representation learning method to achieve the Perceptual Organization process implicitly. The authors design a new module named as Figure-Ground-Aided (FGA), in which the convexity and lower region cues are leveraged by the Interactive Enhancement Module (IEM). The model takes traditional CNNs as the backbone. The optimization utilizes segmentation labels and binary cross-entropy (BCE) loss. In the optimization, the convexity and lower region cues derived by segmentation labels are also used as the supervision. Experiments based on the synthetic and realistic datasets are designed to verify the effectiveness. Overall, the motivation of this paper is clear and the designed model is technically sound. Moreover, some claims of this paper are not factual. Strengths:\n1. The originality is good. This work is inspired by cognitive researches. The motivation is clear and interesting, while the designed model is novel and technically sound.\n2. The paper is organized well and the statements are clear.\n3. The significance is enough. This work aims to learn visual representations being discriminative to the confusing objects and backgrounds. This is an essential problem in computer vision.\n\nWeaknesses:\n1. Some necessary implementation details are missing:\n (1) The model architecture is not introduced completely. In Figure 3, the intermediate features E^Con_4, ..., E^Con_1 and E^LR_4, ..., E^LR_1 are derived from the feature E_5. However, there is no description for this process.\n (2) The size of structure element K_C is 10 while that of K_L is 5. Why authors select these hyperparameters?\n (3) In the ablation experiments, it is not clear how to implement the variant models (a), (b) and (c) without the IEM module.\n\n2. In the main paper, there is no direct evidence verifying that the proposed model can reducing the visual ambiguousness in realistic datasets. Although a large number of realistic datasets are used in the comparison experiments, only the overall quantitative results are shown in the paper. The improvements of these quantitative results may be unrelated to the reduction of visual ambiguousness. \n Therefore, the ablation experiments on the realistic datasets are also necessary. Note that the experimental results on the synthetic datasets cannot be considered as a direct evidence for this problem.\n\n3. Some claims of this paper are not factual.\n (1) In line 297, authors claim that “FGA-Net achieves significant performance improvement compared to the second-best method UGTR [67]”. However, UGTR [67] should not be the second-best method according to the results in Figure 7. For example, SINetv2 [10] is better than UGTR [67] for most of the metrics. Moreover, the superiority of the proposed model is not very significant compared with SINetv2 [10].\n (2) In line 325, authors claim that “Quantitative results are listed in Table 8. We can observe that our model consistently outperforms all other contenders across all metrics”. However, the proposed model is inferior than [20] for the metric Sen.\n 1. Why authors design the experiments on the synthetic dataset? If the realistic datasets may include some other irrelevant factors for Figure-Ground Segregation, authors should explain this problem explicitly.\n2. There are some mistakes in the notations and expressions:\n(1) In Eq. (7), the notation L_M has no explanation. \n(2) In line 277, authors state that “We verify the effectiveness of our proposed method on four challenging visual tasks”. 
However, the following statement only mentions three tasks (COD, PS, LIS).\n(3) In Figure 7, authors state that \"Comparison with 10 SOTA methods ...\". However, only six comparison methods are involved.\n(4) In the second line of Figure 10, the best result is not 0.858 but 0.862. The bold style is marked incorrectly. Authors should check this kind of mistake carefully.\n(5) In Figure 9, the results of [66] on two different datasets are identical. Please check it.\n(6) In line 148, the subscript i has no explanation. \n(7) In line 193, the notations “i\" and “I” have no explanation. \n(8) Figures 7, 8, 9 and 10 should be marked as tables.\n The authors give the societal impact in Section 6. However, no limitations are discussed in this section. The reviewer believes that the requirement of segmentation labels should be a limitation. Specifically, the traditional object detection models only require box labels, which are cheaper than segmentation labels."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"BhFkAzAi2lh",
"2x2dLIW3aFH",
"IgdyH_89gVq",
"LcqzRA4sm6g",
"HnduKggnUOL",
"IgdyH_89gVq",
"MC39_CEdxtl",
"7hfkIet67f",
"hyNGpKKinTr",
"shp0GNjPmD",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d",
"nips_2022_c3HrNgQE7d"
] |
nips_2022_8li9SYYY3eQ | Language Conditioned Spatial Relation Reasoning for 3D Object Grounding | Localizing objects in 3D scenes based on natural language requires understanding and reasoning about spatial relations. In particular, it is often crucial to distinguish similar objects referred by the text, such as \"the left most chair\" and \"a chair next to the window\". In this work we propose a language-conditioned transformer model for grounding 3D objects and their spatial relations. To this end, we design a spatial self-attention layer that accounts for relative distances and orientations between objects in input 3D point clouds. Training such a layer with visual and language inputs enables to disambiguate spatial relations and to localize objects referred by the text. To facilitate the cross-modal learning of relations, we further propose a teacher-student approach where the teacher model is first trained using ground-truth object labels, and then helps to train a student model using point cloud inputs. We perform ablation studies showing advantages of our approach. We also demonstrate our model to significantly outperform the state of the art on the challenging Nr3D, Sr3D and ScanRefer 3D object grounding datasets. | Accept | Reviewers were in agreement that the method and manuscript are strong and provide a valuable connection between 3D perception and language.
The evaluation suffers somewhat from the fact that there are no good datasets targeted at this problem. Authors mitigate this by performing a thorough evaluation with numerous ablations/alternatives that demonstrate their method works as intended.
Beyond just this application, the general approach taken of incorporating priors about a problem domain into a self-attention layer for a Transformer to combine multimodal input is timely and will be of wide interest.
I encourage the authors to update the manuscript and include the additional experiments and results they produced for reviewers.
| train | [
"5eKBg8K3OPL",
"FXQPRhynjO",
"V_bDbGLeap",
"Lg_JbI12kiv",
"ipNSHgKXbHd",
"7J_h_iFInSw",
"__TrIvMY7XR",
"pSOYWwp_N4",
"02Vdl4VPUS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response. The discussions about baselines and the language encoders are helpful. Since there is a huge performance gap between different language encoders, I would suggest the authors add this discussion to the main paper. Overall, I am satisfied with the author's responses and would keep my rating as acceptance.",
" We thank the reviewer for acknowledging our contributions and for constructive comments to our work.\n\n**Q1: In equation 3, it is not very intuitive what $g$ (language conditioned weight) means. What is the role of the classification token $s_{cls}$ being an input in the conditioning?**\n\n$g$ in Eq (3) is a weight to be multiplied with pairwise spatial feature $f$ in Eq (4). We care more about the spatial relations mentioned in the language for an object. For example, in the sentence “the bag closest to the piano”, we would like to obtain a weight $g$ for the object “bag” that is able to filter the distance relation. Therefore, we use the global sentence token $s_{cls}$ and the object token $o_i$ to predict the weight. \n\n**Q2: Related to the previous point, in Figure 2 (left), the Text BERT serves as input to both the spatial self-attention and cross-attention. It is not clear if the self-attention takes the text features $(s_1, …, s_M)$ as input or only takes the $s_{cls}$ token.**\n\nThe spatial self-attention layer only applies self-attention over object tokens. The text features $(s_1, …, s_M)$ are not the input. Only the $s_{cls}$ token is used as in Eq (3).\n\n**Q3: In line 189, the authors state that the teacher model’s input object representation is the sum of object class label embedding and the averaged color embedding. Does the sum introduce any difficulty during training, especially when it requires the model to use object appearance (such as color) rather than spatial relationships to locate objects?**\n\nThe sum operation is standard in existing works [17]. Given the high capacity of the model, it is possible to learn a manifold where the sum operation is effective to fuse the two embeddings.\n\n**Q4: It is nice to see the visualization of the teacher model in Figure 4. It will be compelling if the authors show that the student model attention layers behave similarly.**\n\nWe observe that the attention layers in the student model gradually converge to the teacher’s behavior thanks to the knowledge distillation losses in Eq (8). We will provide the training curves of the distillation losses in the updated version which measure the similarities of attention weights between teacher and student.\n\n**Q5: The loss function in equation 8 is very complicated. There is little detail about the auxiliary losses and the rationale for including them in the objective.**\n\nThe auxiliary losses $L_{sent}$ and $L^*_{obj}$ are standard in previous works [7,9,13,14,16] which have shown to be beneficial for the performance. They are all cross entropy losses. $L_{sent}$ is to predict the target object class from the sentence. $L^*_{obj}$ is to predict the object class for each object token. We will clarify it in the updated version.\n\n**Q6: The final model only marginally outperforms 3D-SPS, according to Table 5. Although the authors state that the comparison is not fair due to detection optimization or single-stage pipeline, it would be helpful to reflect on how to improve the current detected object proposal.**\n\nFor fair comparison with previous two-stage methods, we use the PG object detector [42]. The state-of-the-art works on 3D object detection/segmentation are also based on transformer architectures. Our proposed spatial relation module can be applied in such transformer architectures to enhance the contextual modeling. Moreover, the textual information can be applied to address the few-shot or zero-shot object detection. 
However, we leave this to future work.\n\n**Q7: It would be helpful to include more details about the cross-attention computation in the model.**\n\nThe cross-attention layer follows the standard transformer work [17]. The queries are the object tokens, while keys and values are the textual tokens. We will provide more details of the cross-attention computation in the supplementary material. We will also release the codes and trained models if the paper is accepted.\n",
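For readers less familiar with the transformer layers referenced in this thread, here is a minimal single-head sketch of the cross-attention just described (queries from object tokens, keys and values from textual tokens), together with a predictor for the language-conditioned weight $g$ from $s_{cls}$ and each object token. All tensor and module names, and the choice of a small MLP as the predictor, are illustrative assumptions rather than the paper's actual implementation.

```python
import torch

def cross_attention(obj_tokens, text_tokens, w_q, w_k, w_v):
    """Single-head scaled dot-product cross-attention: objects attend to words."""
    q = obj_tokens @ w_q                      # queries from object tokens, (N, d)
    k = text_tokens @ w_k                     # keys from textual tokens, (M, d)
    v = text_tokens @ w_v                     # values from textual tokens, (M, d)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v  # (N, d) language-aware object features

def language_conditioned_weight(s_cls, obj_tokens, predictor):
    """Hypothetical predictor of the per-object weight g from the global
    sentence token s_cls and each object token o_i."""
    s = s_cls.unsqueeze(0).expand(obj_tokens.shape[0], -1)
    return predictor(torch.cat([s, obj_tokens], dim=-1))
```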
" We thank the reviewer for the constructive comments to our work.\n\n**Q1: The magnitude of contributions might be slightly limited since it follows a standard pipeline. The novel part of this paper can, to some extent, be seen as training tricks / inductive biases to help the model better picking up certain information (spatial relation) that is important to the task.**\n\nWe contend that spatial relation modeling is important for 3D object grounding. As pointed out by Reviewer yLyV, “The described spatial self-attention mechanism is to my knowledge a novel contribution in this area”.\nWe would like to emphasize that the proposed teacher-student training method is also important. The main contribution of the previous SAT [13] is to use additional 2D images to assist the training in 3D object grounding. Our teacher-student training does not rely on such additional supervisions and outperforms existing approaches.\n\n**Q2: It might be argued that the techniques introduced in this network might be limiting eventually if solutions emerge to better capture the spatial relations from the object attributes alone.**\n\nWe contend that improving object attributes is complementary with spatial relation modeling. In Table 1, although we use the ground-truth object labels and colors which are strong object representations, the proposed spatial relation modeling still achieves significant improvements over the baseline.\n\n**Q3: Ablations only justify that each of the components help with the final result, without discussion of alternative methods that might achieve the same goal. I think this is understandable though - should not expect a work that explores an idea to exhaustively study the design space.**\n\nThough we do not extensively explore the design space, we carefully select and compare against two representative approaches to model spatial information in Table 1 (bias and ctx [34]) and show that our fusion strategy outperforms them. \n\n**Q4: What is the motivation of computing Euclidean distance from the centroids, instead of between the bounding boxes? It seems to me that distance between bounding boxes might describe proximity between objects more unambiguously as opposed to centroid-to-centroid distances.**\n\nWe compute the distance from the centroids as it is simple and effective as shown in Table 1. The distances of centroids are sufficient to distance related phrases (see keywords in the reply to Reviewer yLyV). It would be more complicated and require more ad-hoc design to compute distances between bounding boxes. \n\n**Q5: Would it be better to computer vertical angle based on the center of the bottom face?**\n\nWe follow the suggestion and use the bottom center of objects to compute the pairwise vertical angles. The result is in the second row of the table below. Using the bottom center achieves the same performance as using the object center, which suggests that our original design is sufficient to capture the vertical spatial relations of objects.\n\n| | Overall |\n|---|:---:|\n| object center | 74.4 |\n| bottom center | 74.4 |\n| object bounding boxes + MLP | 57.4 |\n\n**Q6: I wonder what's the benefit of hardcoding spatial relations as opposed to, say, using a MLP to automatically compute spatial relations from the bounding boxes of the objects, probably in their local coordinate frame?**\n\nThe result of computing the spatial relations from bounding boxes with a MLP is presented in the last row of the table above. 
We can see that it achieves much worse performance than our designed spatial relation features. This indicates that it is challenging to implicitly learn pairwise spatial relations and that our designed features are beneficial.\n\n**Q7: I imagine many text descriptions are probably related to the room architecture itself. Would there be benefit in also trying to encode object-room relations explicitly?**\n\nWe observed that the object-room relations are relatively simple, such as “in the corner of the room”. Such information is encoded in the absolute positions of the objects.\n\n**Q8: Have you tried to more explicitly handle orientations when defining and encoding the spatial relations?**\n\nWe have not explored object orientations, as the corresponding ground truth is not available for the datasets considered in our work and as the automatic estimation of object poses remains a challenging problem. We believe using object orientations could bring further improvements, and we leave this direction for future work.\n\n**Q9: More discussion of limitations specific to this method would help.**\n\nWe will discuss more specific limitations of our model in the updated version, such as the lack of explicit object orientation modeling.\n",
" We thank the reviewer for detailed and constructive comments. We address the raised points in the following.\n\n**Q1: The base architecture (R1 in Table 1) is not an existing baseline model. To better understand the effectiveness of the proposed modules, the authors should consider integrating the presented techniques with other baselines too.**\n\nIn Table 1, we carry out extensive ablations of the newly proposed methods on top of our baseline model (R1). The results demonstrate the effectiveness of our proposed methods in a fair comparison setting.\nMoreover, our base architecture (R1) is similar to existing works that are based on transformers such as LanguageRefer [15], 3DVG-Transformer [13], SAT [14] and Multi-view transformer [16]. We argue that the slight differences in the base architectures do not have a large influence on the performance. For example, LanguageRefer [15] reported the performance under the same setting as our Table 1 which uses ground-truth object labels on the Nr3D dataset. We can see that the performance of the two models is also similar. Therefore, we believe that the improvement of our model compared to previous transformer-based methods is unrelated to the base architecture.\n\n| | Overall | ViewDep | ViewIndep |\n|---|:---:|:---:|:---:|\n| R1 Tab 1 (Our baseline) | 53.5 | 51.4 | 54.6 |\n| LanguageRefer [15] | 54.3 | 49.1 | 56.8 |\n\n**Q2: L159: Can you be more specific about the horizontal and vertical angles?**\n\nFor two objects A and B, assume that the coordinates of the object center are $(x_a, y_a, z_a)$ and $(x_b, y_b, z_b)$. We can compute the Euclidean distance between A and B as $d$. The horizontal angle $\\theta_h$ is $arctan2(\\frac{y_b - y_a}{x_b - x_a})$ and the vertical angle $\\theta_v$ is $arcsin(\\frac{z_b - z_a}{d})$. In other terminology, the horizontal angle corresponds to the azimuth direction while looking from point A to B, while the vertical angle corresponds to the elevation.\n\n**Q3: L165: Why do you use sigsoftmax, in contrast to, for example, softmax?**\n\nWe tried different ways to encode the spatial information. The R7 and R8 in Table 1 indeed use the softmax function. Empirically, the softmax variants perform worse than our proposed sigsoftmax fusion. The reason might be that sigsoftmax can more aggressively modify the original self-attention weights with spatial information.\n\n**Q4: L235: Is there any reference on using only the first 3 layers of BERT to encode the sentence? Why don't you use all layers? Also, it seems that some baselines (InstanceRefer) use (perhaps) weaker textual encoders (GloVe+GRU), can you provide an ablation study on the textual encoder module?**\n\nWe use the first three layers of BERT following the setup in more recent works such as SAT [13] and Multi-view transformer [16]. Here we provide additional results of different textual encoders under the setting in Table 1. We can see that more BERT layers do not lead to better performance. The pretrained BERT model achieves much better performance than GRU model trained from scratch. We will include these ablations in the final version.\n\n| | Overall |\n|---|:---:|\n| BERT (first 3 layers) | 74.4 |\n| BERT (first 6 layers) | 73.7 |\n| BERT (first 9 layers) | 73.2 |\n| BERT (all 12 layers) | 52.3 |\n| Glove+GRU (3 layers) | 45.7 |\n\n**Q5: L239/L205: If the pointnet is pretrained and fixed, why are you adding $L^u_{obj}$?**\n\nThese auxiliary losses are the same for the teacher and student models, so we simply add the term $L^u_{obj}$ in the final loss. 
It will not influence the pointnet in the student model. We will clarify it in the updated version.\n\n**Q6: L240: The author should explain the details of the rotation augmentation.**\n\nThe rotation augmentation is to rotate the whole point clouds by different angles. Specifically, we randomly select one from four angles [0°, 90°, 180°, 270°].\n\n**Q7: What are the limitations of this paper?**\n\nWe briefly mentioned the limitation in the conclusion section. Our model is a two-stage framework and thus is limited by imperfect object proposals in the first detection stage. As kindly mentioned by Reviewer yLyV, there also exists an inherent dataset bias.\n",
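Because the distance and angle definitions above are compact, a short NumPy sketch of the pairwise spatial features and of the described rotation augmentation may help; the vectorized layout and the assumption that z is the vertical (up) axis are illustrative choices, not the exact released code.

```python
import numpy as np

def pairwise_spatial_features(centers):
    """Pairwise distance, horizontal (azimuth) and vertical (elevation) angles
    between object centers; `centers` is an (N, 3) array of (x, y, z)."""
    diff = centers[None, :, :] - centers[:, None, :]            # (N, N, 3); entry [a, b] is center_b - center_a
    dist = np.linalg.norm(diff, axis=-1)                        # Euclidean distance d
    theta_h = np.arctan2(diff[..., 1], diff[..., 0])            # arctan2(y_b - y_a, x_b - x_a)
    theta_v = np.arcsin(diff[..., 2] / np.maximum(dist, 1e-8))  # arcsin((z_b - z_a) / d)
    return dist, theta_h, theta_v

def rotate_scene(points):
    """Rotation augmentation: rotate the whole point cloud about the z-axis
    by a random angle chosen from {0, 90, 180, 270} degrees."""
    angle = np.random.choice([0.0, 0.5 * np.pi, np.pi, 1.5 * np.pi])
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T
```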
" We thank the reviewer for providing constructive comments. We are happy that the reviewer acknowledged our novel contribution in 3D spatial relation modeling and thorough experiments.\n\n**Q1: More systematic analysis of results. 1) A comparison of overall performance on sentences with and without spatial relations. 2) A breakdown by spatial relation types (involving distance, involving orientation, and involving both).**\n\nIn order to carry out more systematic analysis, we categorize sentences into four groups: 1) *Dist only* which only contains distance descriptions; 2) *Ort only* which only contains orientation descriptions; 3) *Dist & Ort* which contains both distance and orientation descriptions; and 4) the *others* which do not contain spatial relation descriptions. \n\nAs the existing datasets do not have such categorization, we first manually select keywords among top words in the dataset that are relevant to distances or orientations, and consider a sentence has distance or orientation descriptions if it has any of those keywords. Specifically, the distance related keywords are: 'far', 'farther', 'farthest', 'close', 'closer', 'closest', 'next', 'near', 'nearest', 'beside', 'between', 'middle', and the orientation related keywords are: 'front', 'behind', 'back', 'right', 'left', 'leftmost', 'rightmost', 'above', 'under'. \nThe data percentages are presented in the first row of the table below.\n\nWe provide additional analysis for the results of different models in Table 1 in the following table. Comparing R3 and R4 we can see that the explicit pairwise distance modeling contributes most to the distance only sentences but has little influence on orientation related sentences. The pairwise orientation modeling instead can dramatically improve orientation related sentences by 10.5%. Combining both pairwise distance and orientation modeling in R9 achieves the best performance on all categories. Thanks to the reviewer for suggesting this experiment, we will include these results in our updated version. \n\n| | Dist | Ort | Overall | Dist only | Ort only | Dist & Ort | Others |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| data percentage | | | 100 | 33.8 | 27.4 | 9.8 | 29.0 |\n| R3 | x | x | 62.4 | 63.5 | 61.2 | 57.7 | 63.9 |\n| R4 | ✔️ | x | 66.0 | 72.6 | 58.9 | 55.1 | 68.8 |\n| R5 | x | ✔️ | 71.3 | 73.8 | 71.7 | 67.7 | 69.1 |\n| R9 | ✔️ | ✔️ | 74.4 | 77.8 | 74.0 | 69.1 | 72.6 |\n\n**Q2: Limitation on inherent dataset biases.**\n\nThank you for pointing out this limitation. We agree that the existing datasets might not represent a rich diversity of environments as they are all built upon the ScanNet dataset with about 1500 scenes. We will mention the possible biases in the updated version.\n",
" The paper proposes a method for language grounding in 3D scenes (i.e. detecting a specific 3D object instance based on an input natural text describing the object in the context of the scene). The method is based on a transformer architecture, with the key novelty lying in a spatial self-attention mechanism and a teacher-student distillation training setup.\n\nThe method is evaluated on the ScanRefer and ReferIt3D datasets, with comparisons against ablations to demonstrate the value of method components, and also against baselines taken from prior work to demonstrate the relative improvement in 3D object grounding. The experiments report grounding accuracy and show measurable improvements over prior work. Strengths\n+ The paper is clearly motivated in the need to better model 3D spatial relations in the context of the language grounding problem. The described spatial self-attention mechanism is to my knowledge a novel contribution in this area\n+ The exposition is overall quite clear, and the experiments present a thorough spectrum of ablations that do a good job of analyzing the impact of different method components (in particular, the attention to contrasts between view-dependent vs view-independent text inputs is appreciated)\n\nWeaknesses\n- Given the big focus on spatial relations, I would have liked to see a more systematic analysis of the results dependent on the need for attending to spatial relations to disambiguate the specific object instance that the input text describes (as illustrated well in Fig 1). For example, a comparison of overall performance on sentences like those seen in Fig 1 vs sentences that do not use spatial relations to disambiguate would offer more detailed evidence that the proposed architecture is indeed better able to ground objects in such cases. In addition, a breakdown by spatial relation types (involving distance, involving orientation, and involving both) would be quite informative. I would like the authors to address the identified weakness above, at least in terms of the broad strokes of how the approach performs for input text exhibiting different degrees of spatial relation disambiguation. The conclusion mentions the two-stage nature of the approach as a limitation (i.e. that object detections are assumed to be given, and may be imperfect). The conclusion also briefly states that the presented work only has minimal potential negative societal impact. I would agree with this statement given the domain (3D interiors), but I would still point out that inherent dataset biases may misrepresent the rich diversity of environments in which people around the world may live.",
" This paper focuses on improving spatial relationship reasoning in the task of 3D object referring expression comprehension. The key ideas are a dedicated spatial attention mechanism integrated into the basic transformer architecture, and a teacher-student training mechanism. The paper significantly outperforms existing baselines on ReferIt and ScanRefer datasets. The paper is clearly motivated and well written. The model is based on a straightforward transformer architecture, and integrates spatial attention models. Although I would argue that the color feature and spatial relationship feature used in the paper is a little ad-hoc but it is simple and effective. I have only very few questions and suggestions about the weakness of the paper.\n\nPerhaps the biggest weakness of the paper is that, based on my understanding, the base architecture (e.g., the R1 model in Table 1) is not an existing baseline model. Thus, the authors are both changing the basic architecture (c.f., InstanceRefer and Multi-View) and add-ons (teacher-student, spatial attention). To better understand the effectiveness of the proposed modules, the authors should consider integrating the presented techniques with other baselines too.\n\n L159: Can you be more specific about the horizontal and vertical angles?\n\nL165: Why do you use sigsoftmax, in contrast to, for example, softmax?\n\nL235: Is there any reference on using only the first 3 layers of BERT to encode the sentence? Why don't use all layers? Also, it seems that some baselines (InstanceRefer) use (perhaps) weaker textual encoders (GloVe+GRU), can you provide an ablation study on the textual encoder module?\n\nL239/L205: If the pointnet is pretrained and fixed, why are you adding $L_{obj}^u$?\n\nL240: The author should explain the details of the rotation augmentation.\n What are the limitations of this paper?",
" This paper proposes a new method for 3D object grounding with a transformer that combines input of a BERT text encoder and a Point++ object encoder to match the detected object proposals with the input sentence description.\n\nThe key insight here is that explicitly encoding spatial relations between detected objects significantly help with spatial reasoning. Based on this observation, the method incorporates a spatial self-attention layer that computes spatial relevance from language inputs and the explicitly defined pairwise object relations (of relative distance and angle). This spatial relevance is then used to weight the self attention between objects. \n\nCombining this novel spatial reasoning layer with teacher-student knowledge that distills knowledge from a teacher network with access to ground truth labels and color of objects, the proposed method is able to achieve significantly better performance than prior works on multiple benchmarks, where reasonable control of variables are attempted. Thorough ablation are also performed to justify the importance of each of the novel design choices.\n First and foremost: I have not worked on the problem studied in this paper and was not familiar with the related works until reviewing this paper. So this is more of a common-sense review judging from the technical soundness and quality of evaluation, as well as quick skim through of some of the relevant works discussed. I'll defer the judgment about the magnitude of contribution to reviews who are more familiar with this problem.\n\nThat being said, I like this work in general: it has a very clear motivation (improving spatial reasoning for 3D object grounding) and proposes an effective solution that clearly outperforms relevant prior works. My concern about this paper is largely around the magnitude of the contribution and if that have significant impact on future works. Again, I don't think I am really qualified to judge about this, but based on other points below, I lean towards accepting this paper.\n\n### Strengths:\n+ Simple yet important key observation about the importance of explicitly encoding pairwise spatial relations, solid design of a novel module based on this observation.\n+ A generalizable teacher-student training method that can be extended to other methods for the problem studied.\n+ Exceptional results: significantly outperforms relevant prior works on multiple benchmarks and even outperforms works with more assumptions on input.\n+ Extensive ablations validating each of the design choices.\n+ Good writing quality, easy to follow.\n\n### Weaknesses:\n- Magnitude of contribution might be slightly limited since it follows a rather standard pipeline. The novel part of this paper can, to some extent, be seen training tricks / inductive biases to help the model better picking up certain information (spatial relation) that is important to the task.\n- Following the previous point: it might be argued that the techniques introduced in this network might be limiting eventually if solutions emerge to better capture the spatial relations from the object attributes alone.\n- Ablations only justify that each of the component help with the final result, without discussion of alternative method that might achieve the same goal. I think this understandable though - should not expect a work that explores an idea to exhaustively study the design space. ### Design Choices\n- What is the motivation of computing Euclidean distance from the centroids, instead of between the bounding boxes? 
It seems to me that distance between bounding boxes might describe proximity between objects more unambiguously as opposed to centroid-to-centroid distances.\n- Similarly, it seems that the vertical angles between objects might be very sensitive to the height of the objects. Would it be better to decouple the impact of object dimensions, e.g., by computing this angle based on the center of the bottom face?\n- I wonder what's the benefit of hardcoding spatial relations as opposed to, say, using an MLP to automatically compute spatial relations from the bounding boxes of the objects, probably in their local coordinate frame?\n- I imagine many text descriptions are probably related to the room architecture itself. Would there be benefit in also trying to encode object-room relations explicitly?\n- I know it's mentioned in the supplementary that determining the orientations from noisy inputs is challenging. Still, it appears that most natural language descriptions of directions (left, right, etc.) are highly dependent on the orientation of the objects, so the model will need to reason about orientation in some way. Have you tried to more explicitly handle orientations when defining and encoding the spatial relations? If so, does including them degrade the performance because of the ambiguity? If not, what are some possible ways to potentially handle such cases?\n\n### Evaluation\n- As mentioned, some discussion of alternative ways to design the spatial relation layer might be helpful, but definitely not necessary.\n Only one limitation that is common to all methods of this class is mentioned towards the end. More discussion of limitations specific to this method would help.\nQualitative failure cases are provided in the supplementary, with analysis. One example (Supp Figure 2, right) shows a weakness specific to this method regarding object orientation.\n\nI agree that there is no apparent negative social impact.",
" This paper proposes a language-conditioned transformer model for localizing objects in 3D scenes and reasoning about relative spatial distances based on natural language description. The model is based on the transformer architecture and includes spatial attention layers and multi-head design to learn and reason about relative distances and orientations among multiple objects. The spatial attention output is fused with standard self-attention by the sigmoid softmax function proposed by the authors. The model is trained using a teacher-student approach accompanied by rotation augmentation to facilitate learning under scarce data and to produce reasonable inference with point cloud inputs. The final model is tested on object grounding tasks Nr3D, Sr3D, and ScanRefer3D, and outperformed state-of-the-art models. Strengths: \n1. This paper addresses a challenging and interesting task of 3D object reasoning based on natural language description, with a focus on distinguishing similar objects referred by the text using different spatial relationships. \n2. Using multi-head attention, the proposed language-conditioned transformer model effectively learns to capture and reason about different spatial relationships among objects in the self-attention layers. \n3. The model is systematically evaluated on multiple datasets and achieves strong performance, and the learned attention is clearly demonstrated in Figure 4. \n4. The ablation study is carefully designed and shows the value of different components the authors introduce in the paper.\n\nWeaknesses: \n1. In equation 3, it is not very intuitive what g (language conditioned weight) means. What is the role of the classification token s_cls being an input in the conditioning? \n2. Related to the previous point, in Figure 2 (left), the Text BERT serves as input to both the spatial self-attention and cross-attention. It is not clear if the self-attention takes the text features (s_1, …, s_M) as input or only takes the s_cls token.\n3. In line 189, the authors state that the teacher model’s input object representation is the sum of object class label embedding and the averaged color embedding. Does the sum introduce any difficulty during training, especially when it requires the model to use object appearance (such as color) rather than spatial relationships to locate objects?\n4. It is nice to see the visualization of the teacher model in Figure 4. It will be compelling if the authors show that the student model attention layers behave similarly.\n5. The loss function in equation 8 is very complicated. There is little detail about the auxiliary losses and the rationale for including them in the objective.\n6. The final model only marginally outperforms 3D-SPS, according to Table 5. Although the authors state that the comparison is not fair due to detection optimization or single-stage pipeline, it would be helpful to reflect on how to improve the current detected object proposal. As mentioned in the weaknesses above, clarifying the language conditioned weight, BERT text feature input, and auxiliary tasks would be helpful. In addition, it would be helpful to include more details about the cross-attention computation in the model. The object proposal in the two-stage pipeline is critical because it constrains the hypothesis space for cross-modal matching. Thus the prospect of improvement in the object proposal stage would be beneficial."
] | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"Lg_JbI12kiv",
"02Vdl4VPUS",
"pSOYWwp_N4",
"__TrIvMY7XR",
"7J_h_iFInSw",
"nips_2022_8li9SYYY3eQ",
"nips_2022_8li9SYYY3eQ",
"nips_2022_8li9SYYY3eQ",
"nips_2022_8li9SYYY3eQ"
] |
nips_2022_dz79MhQXWvg | Weakly supervised causal representation learning | Learning high-level causal representations together with a causal model from unstructured low-level data such as pixels is impossible from observational data alone. We prove under mild assumptions that this representation is however identifiable in a weakly supervised setting. This involves a dataset with paired samples before and after random, unknown interventions, but no further labels. We then introduce implicit latent causal models, variational autoencoders that represent causal variables and causal structure without having to optimize an explicit discrete graph structure. On simple image data, including a novel dataset of simulated robotic manipulation, we demonstrate that such models can reliably identify the causal structure and disentangle causal variables. | Accept | The reviewers were split about this paper: on one hand they would have liked to see better experimental results, particularly for larger graphs, on the other they appreciated the identifiability results and the ILCM algorithm. After going through the paper and discussion I have voted to accept for the following reason: even though the experimental results could be strengthened, papers with novel approaches to long-standing problems are the kind that make NeurIPS an uniquely interesting conference, particularly if those paper have strong theoretical guarantees. I urge the authors to take all of the reviewers changes into account (if not already done so). Once done this paper will be a nice addition to the conference! | train | [
"8xU24BWibI",
"pUgHe97mVR",
"IOQTTar-go",
"JqsQLMPimE3",
"Pg84bQjpf81",
"6jTtkWZLmmC",
"zJ1R7agw4Pc",
"dAypEhMKfu1",
"yZh-u7swWBE",
"B3ZyVz90pgl",
"cIDYGYq4aMw",
"T_W7mNLNzG",
"8iSQB5Cpo9",
"xMhNmUv4C4v",
"Sk_Erlibqp",
"G3svjvZ3WVj",
"wevYiyJVQ0L"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the reply, we are glad to hear that we were able to clarify our approach.\n\nYour questions and the toy example were very helpful. In addition to improving our description of ILCMs along these lines, in the final version of our paper we will also discuss which of our assumptions are required for reduced-form SCMs to capture equivalent information to SCMs, and for ILCMs to be equivalent to ELCMs.",
" I thank the authors for responding and explaining their method in the context of my toy example; this was certainly helpful, and I understand it better now. It would be helpful to include something along those lines as suggested. \n\nI am intrigued and would like to better understand how general this procedure is beyond the considered setting: are there types of interventions for which this does not work? how about causally insufficient systems / dependent noises? how restrictive is the assumption of real-valued variables and bijectivity etc... \n\nBut I understand that this is outside the scope of the current paper, certainly outside the scope of the discussion period, and I do not expect to authors to still reply to this. I will still update my review and tend to increase my score.\n\n",
" We would like to add two more comments.\n\nFirst, we describe how we extract causal mechanisms and graphs in Section C.4 of the supplementary material. The heuristic algorithm (\"ILCM-H\") corresponds to the iterative procedure we described in our previous reply. In the final version of our manuscript, we would describe this in more detail and more prominently in the main paper (especially since we will then have an extra page at our disposal).\n\nSecond, we would like to point out that in practice from pixels we can generate causal variables, infer intervention targets, and generate observational, interventional, and counterfactual samples (in data space) from ILCMs without extracting the causal mechanisms $f_i$ as described above.\nFor instance, given one sample $x$ and an intervention target $I=\\{i\\}$, we can sample from counterfactual distributions $p(\\tilde{x}|x, I)$ by encoding $x$ to $e \\sim q(e|x)$, sampling $\\tilde z_i \\sim \\mathcal{N}$ (or setting to a desired value), setting $\\tilde e_i = s_i^{-1}(e_{\\setminus i}; \\tilde z_i)$ , keeping $\\tilde e_{\\setminus i} = e_{\\setminus i}$, and decoding $\\tilde{x} \\sim p(\\tilde x|\\tilde{e})$. This is what we do in our visualizations.\n\nThe cases that require extraction of the causal mechanisms are:\n- Inferring the graph with the heuristic ILCM-H method\n- Inferring interventions / counterfactuals from causal variables, as this requires us to invert the solution function $s$, which first requires computing the mechanisms.",
" We are very thankful for your detailed comments and the opportunity to explain better than we've done until now. Clearly, we are using some non-standard methodology that we should elaborate on in more detail in the paper. In the final version of this work, we will include a more comprehensive explanation. We'd be happy to use the example you provided as part of that exposition.\n\n### Explicit example\nWe are given the solution function\n$$s: \\mathbb{R}^3 \\to \\mathbb{R}^3 : \\epsilon \\mapsto z = \\begin{pmatrix}1 & 0 & 0 \\\\\\\\ 1 & 1 & 0 \\\\\\\\ 2 & 1 & 1\\end{pmatrix}\\epsilon$$\n\nNow from this function, we can reconstruct the SCM $z_1=f_1(;\\epsilon_1), z_2=f_2(z_1; \\epsilon_2), z_3=f_3(z_1, z_2; \\epsilon_3)$ in the following way.\n\n\nLooking at the Jacobian (here equal to the matrix $s$), we find that the ordering $(1,2,3)$ makes the Jacobian triangular, so can solve in that order.\n\nWe are assuming that any causal mechanism is a bijective function from the noise, conditional on the parents. For simplicity, we write as if the causal mechanisms are functions of all preceeding nodes in the topological ordering and find them to be constant in all non-parents.\nAs a general approach:\n- For the first node, we note that $z_1=f_1(\\epsilon_1)=s_1(\\epsilon_1)$.\n- For the second node, we write $z_2 = f_2(z_1; \\epsilon_2) = s_2(f^{-1}_1(z_1); \\epsilon_2)$.\n- For the third node, we write $z_3= f_3(z_1,z_2; \\epsilon_3) = s_3(f^{-1}_1(z_1), f^{-1}_2(z_1; z_2); \\epsilon_3)$.\n- For the $n$th node, we write $z\\_n= f\\_n(z\\_1,...,z\\_{n-1}; \\epsilon\\_n) = s_n(f^{-1}\\_1(z\\_1), ..., f^{-1}\\_{n-1}(z\\_1, ..., z\\_{n-2}; z\\_{n-1}); \\epsilon\\_n)$.\n\nIn the present example, we infer:\n- $z_1 = f_1(\\epsilon_1)=\\epsilon_1$. Note that the inverse is $\\epsilon_1 = f_1^{-1}(z_1)=z_1$\n- $z_2 = f_2(z_1; \\epsilon_2) = s_2(f^{-1}_1(z_1); \\epsilon_2)=s_2(z_1; \\epsilon_2)=z_1 + \\epsilon_2$. Note that the inverse is $\\epsilon_2 = f_2^{-1}(z_1; z_2)=z_2 - z_1$\n- \\begin{align*}z_3&= f_3(z_1,z_2; \\epsilon_3) = s_3(f^{-1}_1(z_1), f^{-1}_2(z_1; z_2); \\epsilon_3)=s_3(z_1, z_2 - z_1; \\epsilon_3) \\\\\\\\ &=2 z_1 + (z_2 - z_1) + \\epsilon_3 = z_1 + z_2 + \\epsilon_3 \\end{align*}\n\nHence, we've recovered the causal mechanisms $f_1, f_2, f_3$ just from the functional form of $s$.\n\nSubsequently, we can do counterfactual inference. We observe $z=(1,2,4)$, use the inverse solution function $s^{-1}$ to obtain $\\epsilon=(1, 1, 1)$. Then, we do an intervention $do(z_2=0)$. We can model this in terms of an intervened noise space $\\tilde \\epsilon$, for example via an intervened causal mechanism $\\tilde z_2 = \\tilde f_2(\\tilde \\epsilon_2)=\\tilde \\epsilon_2$ and are then interested in the intervention $\\tilde \\epsilon_2 = 0$. 
For all the non-intervened variables, the causal mechanism and the noise variable are unchanged, so $\\tilde f_1 =f_1,\\tilde f_3=f_3, \\tilde \\epsilon_1=\\epsilon_1, \\tilde\\epsilon_2 = \\epsilon_2$, to get $\\tilde \\epsilon = (1, 0, 1)$.\n\nWe unroll the interventional mechanisms $\\tilde f_1, \\tilde f_2, \\tilde f_3$, to get interventional solution function\n$$\n\\tilde s_{I=2} : \\mathbb{R}^3 \\to \\mathbb{R}^3 : \\tilde \\epsilon \\mapsto \\tilde z = \\begin{pmatrix}1 & 0 & 0 \\\\\\\\ 0 & 1 & 0 \\\\\\\\ 1 & 1 & 1\\end{pmatrix}\\tilde \\epsilon\n$$\nWe evaluate this to get $\\tilde z = \\tilde s_{I=2}(\\tilde \\epsilon)=(1, 0, 2)$, as expected.\n\n### Bigger picture\nIn our ILCM model, we are able to identify the solution function from the weakly supervised data. With our assumptions, in particular the conditional invertibility of the causal mechanisms, the SCM can be recovered from the solution function / reduced form SCM. This allows us to conduct interventions, as shown in our paper in Figures 4 and 5.\n\nDoes this clarify the methodology?",
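The arithmetic in this example can be verified with a few lines of NumPy; the matrices below are copied directly from the example, so this is only a numerical sanity check of the described procedure.

```python
import numpy as np

S = np.array([[1., 0., 0.],          # solution function: z = S @ eps
              [1., 1., 0.],
              [2., 1., 1.]])

z_obs = np.array([1., 2., 4.])
eps = np.linalg.solve(S, z_obs)      # abduction: eps = (1, 1, 1)

# Recovered mechanisms: z1 = eps1, z2 = z1 + eps2, z3 = z1 + z2 + eps3.
# Intervention do(z2 = 0): replace the second mechanism, keep the other noises.
eps_cf = eps.copy()
eps_cf[1] = 0.0                      # tilde-eps2 such that tilde-z2 = tilde-eps2 = 0

S_do2 = np.array([[1., 0., 0.],      # unrolled intervened solution for I = {2}
                  [0., 1., 0.],
                  [1., 1., 1.]])

print(eps, S_do2 @ eps_cf)           # [1. 1. 1.] [1. 0. 2.]
```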
" I thank the authors for their extensive response. I will update my review and respond to it separately.\n\nFor now, since I still have some questions as to how exactly the solution function is used, I would like to use the remaining discussion time to ask the authors to further elaborate on the following concrete example:\n\nConsider an acyclic Markovian linear SCM with three variables:\n$$z_1:=\\epsilon_1$$\n$$z_2:=z_1+\\epsilon_2$$\n$$z_3:=z_1+z_2+\\epsilon_3$$\n\nSuppose we observe $z=(1,2,4)$. \n\nAbduction yields $\\epsilon=(1,1,1)$.\n\nWe want to reason about the counterfactual under $do(z_2=0)$.\n\nWe update the second equation, substitute $z_2=0$ into the third, and obtain the counterfactual (post-intervention) view $z=(1,0, 2)$. \n\nNow consider carrying out the same calculation from the reduced form SCM:\n$$z_1=\\epsilon_1$$\n$$z_2=\\epsilon_1+\\epsilon_2$$\n$$z_3=2\\epsilon_1+\\epsilon_2+\\epsilon_3$$\n\nSimilarly, abduction on $z=(1,2,4)$ yields $\\epsilon=(1,1,1)$.\n\nNow my main question is: how exactly do you calculate the counterfactual under $do(z_2=0)$ to arrive at the correct solution $z=(1,0, 2)$? \n\nClearly, the naive approach of only replacing the second reduced form equation would have no effect on $z_3$ since $z_2$ (what has been actually changed) does not appear there, but is represented as $\\epsilon_1+\\epsilon_2$ (which are assumed constant for counterfactual reasoning). This is the issue I was referring to with \"the reduced form SCM contains strictly less information\": when used naively (without any modifications) it does not allow modelling interventions on the endogenous variables because their relationships are fully explained in terms of a single mapping from the exogenous variables. \n\n[Fixing this would likely require knowing exactly what the dependence of $z_2$ on $\\epsilon_1$ and $\\epsilon_2$ is; this is what my comment \"does this not require knowledge of the true mechanism?\" referred to---I understand your result only uses information on $p(x,x')$.]\n\nNow based on your response, I understand (partially) that you propose some intermediate step as a fix which recursively constructs an alternative solution function (reduced form SCM) to model the intervention. Could you please explain and showcase this for the above example? I didn't fully get it and feel it might be easiest to get the main idea in a concrete toy example. Thank you!\n\n[Generally, I still feel that there are some subtleties and technical details regarding the use of solution functions/reduced form in place of the full SCM that deserve more attention in the main paper. There must be good reasons why SCMs contain internal structure and are not just expressed as a function from the space of exogenous variables to the space of endogenous variables. I think I either fail to see the simplifying assumptions/steps or the authors have had some profound insights here that are of independent interest; in either case, I think this connection warrants further discussion.]",
" We would like to thank all reviewers again for their helpful comments. We hope that we were able to address all points with our replies and the updated paper.\n\nIf there are any open questions, please let us know. We would be more than glad to discuss them before the end of the discussion period later today.",
" Dear Reviewers! Thank you so much for your time on this paper so far.\n\nThe authors have written a detailed response to your concerns. How does this change your review?\n\nPlease engage with the authors in the way that you would like reviewers to engage your submitted papers: critically and open to changing your mind.\n\nLooking forward to the discussion!\n",
" We thank all the reviewers for their thoughtful comments and constructive feedback. We are encouraged that the reviewers find the problem of causal representation learning relevant (GuNY, UyEP) and appreciate that we prove identifiability in the weakly supervised setting (GuNY, eczw, HAYV). In addition, we provide a novel learning strategy to infer causal structure without explicitly forming graphs, which the reviewers considered one of the strengths of the paper (GuNY, eczw, HAYV). Some reviewers also \"enjoyed reading the paper\" (HAYV) and found it \"well-written and well-motivated\" (GuNY).\n\nThe reviewers also gave some excellent suggestions for improvement. Reviewer eczw has some concerns regarding the scale of the experiments. We attempted to address this by including an additional experiment, in which we train our model on a synthetic task with a varying number of variables. We found that our method scales well to 10 causal variables without any additional tuning. Furthermore, we have improved the writing in a new version to address some concerns the reviewers had.\n\nReviewers eczw and HAYV correctly note that we need to make limiting assumptions to arrive at our theoretical result. However, it is well known that some additional assumptions are necessary for causal identifiability. We believe that we increase the understanding of what kind of assumptions are sufficient, and provide a practical method within our regime. Hence, we think that our paper makes an important contribution to causal representation learning.\n",
" Thank you for the helpful review and the encouraging words. In the following, we would like to address your two main questions and comments.\n\n**Advantage of ILCMs over ELCMs**: It is a good question why exactly ILCMs perform so much better than ELCMs. You are right, both models can initially learn a wrong graph orientation. However, we believe that it is much easier for an implicit LCM to \"unstuck\" itself: the neural solution functions in the ILCM can intermittently take on configurations that do not correspond to a valid DAG, for instance with $e\\_2$ influencing the distribution of $\\tilde{e}\\_1$ and ${e}\\_1$ influencing the distribution of $\\tilde{e}\\_2$. This allows the ILCM to smoothly transition out of a bad graph configuration into the correct graph without driving up any loss terms. In the end, the learned solution functions always correspond to valid DAGs, which can for instance be extracted with the heuristic algorithm we proposed.\n\n**Applicability of assumptions**: We agree that our work is far from applicable to real-life problems and that our identifiability theorem (and the practical implementation) rely on strong assumptions (in particular that of perfect interventions) that will often not be satisfied. We are glad to hear that you share our opinion that our work is nevertheless a useful starting point, and agree that there should be more research into relaxing these assumptions. In Section B of the supplementary material, we attempt to summarize our assumptions as well as some thoughts on which of the assumptions are likely to be relaxed in future work.\n\n**Updated paper**: We have updated our submission with four main changes. In the PDF file, we have highlighted text changes in green.\n\n1. To study whether our method scales to higher-dimensional data spaces, we have updated our CausalCircuit dataset to 512x512 resolution. ILCMs are still able to disentangle causal factors (DCI disentanglement of 0.97) and infer the true causal graph, outperforming the acausal baselines.\n2. To study scaling to more complex causal systems, we have added a new experiment with simple synthetic datasets with between 2 and 20 causal variables and random DAGs. ILCMs scale to around 10 causal variables: in this regime they enable disentanglement scores close to 1 and outperform the baseline in terms of the accuracy of the inferred causal graph. Scaling ILCMs to 15 or more variables will require more research.\n3. In addition to the DCI disentanglement score, we now report the completeness and informativeness scores proposed in Eastwood & Williams, ICLR 2018, which support the same conclusions as the disentanglement score.\n4. We have improved the text in various places.\n\nThank you again for your review and comments.\n",
" Thank you for your review and the detailed comments.\n\n**Assumptions**: We agree that it is hard to verify if the assumptions of our identifiability theory hold in any given dataset. In most real-life datasets, they will likely not hold. We wish that we already had a causal representation learning algorithm that works in realistic settings, but unfortunately, this problem is far from being solved. Other recent works on causal representation learning require similar or stronger assumptions, see for instance CausalVAE (M. Yang et al, CVPR 2021) or CITRIS (P. Lippe et al, ICML 2022). We hope that we are transparent about these limitations, which we discuss in our conclusions and in Section C in the supplementary material.\n\nDespite being still some distance away from realistic use cases, we believe that our work contributes to better understanding what kind of information can make causal structure identifiable. We are, to the best of our knowledge, the first to show even in principle that causal variables and arbitrary causal graphs can be identified from pairs of pre- and post-intervention data. We believe that this result may be valuable to guide further progress in the field.\n\n**Experiments with larger graphs**: Thanks for the feedback on our experiments. We would like to stress that the goal of these experiments was not to study realistic settings, but to verify that our identifiability result translates into a learning algorithm that can under certain assumptions identify causal variables and graphs.\n\nWe added an experiment to study scaling to larger causal graphs. Thank you for pointing out this omission in our original submission and suggesting this experiment. We generate simple synthetic datasets with linear SCMs, random DAGs, and SO(n) decoders with 2 to 20 causal variables. We find that ILCMs scale to around 10 causal variables, visible for instance in these mean disentanglement scores:\n\n| Causal variables | ILCM | dVAE | β-VAE |\n|---:|---:|---:|---:|\n| 8 | 0.99 | 0.36 | 0.25 |\n| 10 | 0.91 | 0.36 | 0.20 |\n| 12 | 0.85 | 0.29 | 0.22 |\n| 15 | 0.58 | 0.25 | 0.21 |\n\nFor more results and error bars see Fig. 6 in the updated paper, for graph inference results Fig. 13 in the updated supplementary material. Scaling ILCMs to systems of 15 variables or more will require more research.\n\n\n**Higher-dimensional data**: We explore the scaling with data dimensionality by updating the CausalCircuit dataset to a resolution of 512x512 pixels. ILCMs are still able to learn the correct causal variables (DCI disentanglement of 0.97) and recover the true causal graph.\n\n**Disentanglement**: We believe there may be a misunderstanding due to the fact that the word \"disentanglement\" is used to mean two different things: (i) that a set of latent variables is in a one-to-one correspondence with a set of ground-truth factors, or (ii) that a set of latent variables follows a distribution such that each pair of latent variables are independent.\n\nThe disentanglement score we report measures (i). While this metric is often used in the setting where variables are disentangled based on property (ii), this is not necessary for the DCI metrics to be meaningful. 
They can still measure disentanglement (in sense (i)) between correlated variables, like our causal variables in our case; see for instance, Träuble et al, \"On disentangled representations learned from correlated data\" (ICML 2021).\n\n**Questions**:\n\n> For Causal3Dindent dataset, only 6 graphs are considered.\n\nFor 3 nodes, there are 6 unique DAGs (up to a permutation of the nodes). We created one dataset for each of these DAGs. The mapping from high-level concepts to the nodes in these graphs is random, as are the causal mechanisms.\n\n> Does ILCM also suffer from optimisation issues in higher dimensions?\n\nIn our experiments, ILCMs proved robust to train. Unlike ELCMs, ILCMs did not require scanning over multiple random seeds to learn the correct variables and graphs. We attribute this to ILCMs being less prone to local minima in the loss landscape from wrongly oriented graph edges. ILCMs (but not ELCMs) are able to escape from such configurations without incurring a loss penalty because they can intermittently take on non-acyclic configurations.\n\n**Updated paper**: We have updated our submission with four main changes, highlighted in green in the PDF file:\n\n1. To study scaling to higher-dimensional data, we have updated our CausalCircuit dataset to 512x512 resolution.\n2. To study scaling to more complex causal systems, we have added a new experiment with synthetic datasets with between 2 and 20 causal variables.\n3. In addition to disentanglement, we report the completeness and informativeness scores proposed in Eastwood & Williams, ICLR 2018.\n4. We have improved the text in various places.\n\nThank you again for your comments and suggestions. They were very helpful in improving our manuscript.\n",
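To make the new synthetic benchmark concrete, a toy data generator in the same spirit could look as follows; the edge density, the weight scale, and the use of a QR-based orthogonal matrix standing in for the SO(n) decoder are assumptions for illustration, not the exact setup behind the reported numbers.

```python
import numpy as np

def sample_synthetic_dataset(n_vars, n_samples, seed=0):
    """Observations from a random linear acyclic SCM pushed through a random
    orthogonal decoder; returns (observations, latent causal variables)."""
    rng = np.random.default_rng(seed)
    W = np.triu(rng.normal(size=(n_vars, n_vars)), k=1)   # random DAG, fixed causal order
    W *= rng.random((n_vars, n_vars)) < 0.5               # drop edges to sparsify the graph
    eps = rng.normal(size=(n_samples, n_vars))
    z = np.zeros_like(eps)
    for i in range(n_vars):                               # ancestral sampling
        z[:, i] = z @ W[:, i] + eps[:, i]
    decoder, _ = np.linalg.qr(rng.normal(size=(n_vars, n_vars)))
    return z @ decoder.T, z
```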
" Thank you for your review and helpful, constructive feedback. Here we will address your main points.\n\n**Presentation**: We wholeheartedly agree that the space constraints made it impossible to explain the underlying concepts, our theory, and our practical implementation and experiments in a lot of detail. We are determined to use the space we have as well as we can.\n\nThank you for pointing out two omissions where we failed to define important concepts. We have added short explanations of diffeomorphisms and faithfulness in our updated version of the paper and also added a reference that explains common concepts in causality in more depth. We also improved the text in a number of other places and hope that these changes improved readability. We would be grateful if you could point us to any other parts of the paper that are particularly hard to follow.\n\n> What is faithful in the causal domain?\n\nLoosely speaking, a causal model is faithful if there are no accidental independence relations in the data: there are no variables that are causally connected according to the causal graph, but are independent in the data distribution. This is a very common assumption in causal discovery (see e.g. Hyttinen et al., \"Experiment Selection for Causal Discovery\", JMLR 2013), since without faithfulness it is often impossible to identify all edges in the causal graph. \n\n> What is $p_M^X$?\n\nThis is the data distribution in the weakly supervised setting (pre- and post-intervention data) according to an LCM M. We define it in Definition 3.\n\n**Training algorithm**: We have added an explicit algorithm to Section C.2 of the supplementary material that shows the complete training procedure for ILCMs. ELCM training is very similar, the differences are explained in Section E in the supplementary material.\n\n**Accuracy of intervention inference**: We report the accuracy of intervention inference in Table 1 in the \"Acc\" column. We find that our methods are consistently able to classify interventions with an accuracy of over 95\\%. The same is true for the dVAE baseline, but despite the good intervention accuracy, that approach is not able to disentangle the causal factors and learn the causal graph correctly.\n\nIndeed, we find that it is crucial to accurately infer interventions in order to learn disentangled representations and correct causal graphs. In earlier experiments where the intervention inference failed, the model was never able to disentangle the causal variables.\n\nYou raise the interesting question of how to overcome inference errors. We developed a training schedule that improved the quality of intervention inference. Early in training, we just train the noise encoder and decoder as well as the intervention encoder (and set the solution distributions to a uniform probability density that is not trained yet). This substantially stabilizes the training of the intervention encoder, as it avoids contributions to the loss from the randomly initialized solution functions. We describe this schedule in Section C.2 of the supplementary material. \n\n**Updated paper**: All in all, we have updated our submission with four main changes. In the PDF file, we have highlighted text changes in green.\n\n1. To study whether our method scales to higher-dimensional data spaces, we have updated our CausalCircuit dataset to 512x512 resolution. ILCMs are still able to disentangle causal factors (DCI disentanglement of 0.97) and infer the true causal graph, outperforming the acausal baselines.\n2. 
To study scaling to more complex causal systems, we have added a new experiment with simple synthetic datasets with between 2 and 20 causal variables and random DAGs. ILCMs scale to around 10 causal variables: in this regime they enable disentanglement scores close to 1 and outperform the baseline in terms of the accuracy of the inferred causal graph. Scaling ILCMs to 15 or more variables will require more research.\n3. In addition to the DCI disentanglement score, we now report the completeness and informativeness scores proposed in Eastwood & Williams, ICLR 2018, which support the same conclusions as the disentanglement score.\n4. We have improved the text in various places. Thank you for your helpful suggestions for this.\n\nThank you again for your review and comments, which helped us improve our manuscript.\n",
" **Various questions**: \n\n> Defn. 2: What does “compatible” mean here? What implications does this have for the (obs./int./cf.) distributions implied by the two SCMs, will they match?\n\nLoosely speaking, ``compatible'' means that when you take all components of the first LCM and transform them under a permutation of the variables and elementwise transformations of each variable, you get out the second LCM. We make this statement more precise in Definitions 6, 8, and 9 in the appendix. There we phrase the idea of compatibility mostly through commutation relations like that shown in Eq. (5).\n\n> what if the true post-intervention distribution is non-Gaussian? Why is Gaussianity needed here, or could a more flexible density also be used?\n\nWe only show identifiability of causal variables *up to elementwise reparameterizations of the causal variables* (and we think it is unlikely that a stronger identifiability statement could hold without explicit labels on the causal variables or distributional assumptions). Forcing each causal variable to follow a particular base distribution when intervened upon exactly resolves this ambiguity. We chose a Gaussian distribution for simplicity, but more complex distributions are certainly also possible. Note that any distribution on $\\mathbb{R}$ with a smooth density that is non-zero everywhere is isomorphic to the standard Gaussian. This is a corollary of Lemma 1.\n\n> L. 238: since $q(I|x,x')$ involves $\\mu\\_e(x)$, does this not depend on the noise encoder? Is $\\mu\\_e(x)$ shared between both?\n\nIndeed, the noise mean function $\\mu\\_e(x)$ is shared between both. (Maybe you are wondering why we did not just write $q(I|e,e')$ then. That would be problematic because the distribution of the noise encodings depends on the intervention targets as $e, e' \\sim q(x, x', I)$, and we needed to avoid cyclical dependencies. However, the mean function $\\mu\\_e(x)$ does not depend on I, so we are free to use that in $q(I|x,x')$.)\n\n> L. 296: is the graph of dVAE-E not always the empty graph by construction?\n\nEssentially yes, but there is some fineprint. The prior of the dVAE model is independent between the causal variables. However, it is still possible that the distribution of latents given by the data distribution pushed through the encoder (which can in general be different from the prior, see for instance I. Tolstikhin et al, \"Wasserstein Auto-Encoders\", ICLR 2018) is not independent. That is why ENCO occasionally detects edges in this distribution.\n\n> L. 300: are objects/latents learned by SlotAttention assumed/constructed to be independent?\n\nNo, the slot attention baseline does not assume independence of latents, because it is not a probabilistic model. It is just a feed-forward neural network that is trained on pixel reconstruction loss.\n\n> L.369: any intuition on what the failure mode is here?\n\nWith discrete ground-truth causal variables and continuous latent spaces in the VAE, the encoder is free to collapse the whole dataset into lower-dimensional manifolds in the latent space, even a single line, and to model all data pairs as an intervention along this one line.\n\n\n**Updated paper**: We have updated our submission with four main changes, highlighted in green in the PDF file:\n\n1. To study whether our method scales to higher-dimensional data spaces, we have updated our CausalCircuit dataset to 512x512 resolution. 
ILCMs are still able to disentangle causal factors (DCI disentanglement of 0.97) and infer the true causal graph, outperforming the acausal baselines.\n2. To study scaling to more complex causal systems, we have added a new experiment with simple synthetic datasets with between 2 and 20 causal variables and random DAGs. ILCMs scale to around 10 causal variables: in this regime they enable disentanglement scores close to 1 and outperform the baseline in terms of the accuracy of the inferred causal graph. Scaling ILCMs to 15 or more variables will require more research.\n3. In addition to the DCI disentanglement score, we now report the completeness and informativeness scores.\n4. We have improved the text in various places. Thank you for your helpful suggestions for this.\n\nThank you again for your review and comments, which were very helpful in improving our submission.\n",
" Thank you for your thorough review and the many helpful suggestions. In this response, we focus on your main points, but take all the suggestions into account in our revised paper.\n\n**Points regarding solution functions**:\n> [The reduced SCM representation] is [...] strictly less expressive in that [it] cannot model arbitrary interventions\n\nDue to our assumption that the causal mechanisms are bijective functions from the noise and that the graph is acyclic, the causal mechanisms can be recovered from the solution. Consider a simple two variable case, with graph $A \\to B$ and mechanisms $z\\_A=f\\_A(\\epsilon\\_A), z\\_B=f\\_B(z\\_A, \\epsilon\\_B)$. Then the solution is $s(\\epsilon\\_A, \\epsilon\\_B)=(f\\_A(\\epsilon\\_A), f\\_B(f\\_A(\\epsilon\\_A), \\epsilon\\_B))$. From this function, we can recover the mechanism: $f\\_A(\\epsilon\\_A)=s(\\epsilon\\_A, ...)\\_A, f\\_B(z\\_A, \\epsilon\\_B)=s(s^{-1}(z\\_A, ...)\\_A, \\epsilon\\_B)\\_B$. Note that the inverse is well-defined, because the $A$ component of $s$ is non-constant only in the first argument $\\epsilon\\_A$, and is a bijective function thereof by assumption. This process requires us to know a topological ordering of the DAG, which can also be recovered from the solution: it is any permutation of rows and columns so that the Jacobian $\\partial s\\_i / \\partial \\epsilon\\_j$ is triangular on all inputs $\\epsilon$. This argument generalizes to any DAG.\n\nSo including the bijectivity assumption, we disagree that the reduced SCM contains less information than the original SCM. \nIn the ILCM, to model any perfect intervention on variable $i$, we encode $x$ to $e$, sample (for stochastic interventions) or set (for hard interventions) $\\tilde z\\_i$, transform with $s_i^{-1}$ to $\\tilde e$ and decode.\n\nWe have update the description of the ICLM in the paper and appendix C.3 in the supplementary material to clarify these points.\n\n> How are intervened solution functions $\\tilde{s}\\_I$ defined?\n\nIn the ELCM, for each intervened mechanism $i \\in I$, we have an intervened causal mechanism $\\tilde f\\_i : \\mathcal{E}\\_i \\to \\mathcal{Z}\\_i$, mapping a noise variable to the stochastic intervened variable independent of its original parents. For any $j \\not \\in I$, we have the original causal mechanism $\\tilde f\\_j=f\\_j$. The solution $\\tilde s\\_I$ then follows from the intervened mechanisms $\\tilde f$ by recursive substitution, just as with the original solution.\n\n> Does this not require knowledge of the true mechanisms?\n\nNo, we prove identifiability from just observing the $p(x, \\tilde x)$.\n\n\n> What aspect of the theory and proposed method would break down, e.g., for hard interventions (which set $z\\_i$ to a constant) or for soft interventions that preserve some dependence on the parents?\n\nAs described above, our learned model can perform perfect, non-stochastic interventions. However, to learn it, we are using unknown interventions sampled from some distribution. If we would train with only a single intervention value for each variable, the intervened distribution $\\tilde z$ would no longer have full support and the solution function not a diffeomorphism and our method would break down.\n\nPlease see appendix B for a discussion on generalization to soft interventions: that case is not identifiable.\n\n> Domain of intervened noise variable\n \nThe domain for each variable is just $\\mathbb{R}$, so those spaces are equal. 
The subscript $\\tilde {\\mathcal{E}}\\_i$ denotes that this is the distribution over the interventional noise $\\tilde \\epsilon\\_i$.\n\n> footnote 3\n\nWhich footnote do you mean? Our initial version does not contain a footnote 3.\n\n> Proof that only intervened noise variables change\n\nThis is an excellent point, thank you very much. We removed this proof from an earlier version, as we don't rely on it in our main theorem anymore. For the ILCM construction, we still use it. We added it back to appendix C.\n\n\n**Suitability of DCI scores**:\n\nIt is common, but not necessary to use DCI metrics for independent variables. The DCI metrics quantifies how much ground-truth factors and learned latents are in a one-to-one correspondence, but does not make assumptions about the joint distribution of latent and ground-truth factors. It is therefore also suited to measure disentanglement (in the sense of one-to-one correspondence between true factors and learned latents) between correlated variables, like the causal variables in our case.\n\nAs an example, Träuble et al's \"On disentangled representations learned from correlated data\" (ICML 2021) also measures disentanglement with the DCI disentanglement score when the true factors are not independent.\n\nAs for the implementation of the DCI metrics, we use gradient boosted trees (sklearn's implementation with default parameters) to construct the feature importance matrix. We added the completeness C and informativeness I scores to the results in the appendix. C and D give the same conclusions, while I is low for all models. Thank you for the suggestion.",
" ### Problem setting \nThe paper addresses the causal representation learning task, i.e., inferring high-level causal latent variables from low-level observations. Specifically, it considers a weakly-supervised setting in which pairs $(x,x’)$ of pre- and post-intervention observations are available. This can be seen as a generalization of Locatello et al. [5] to a setting in which the latents are not mutually independent but causally related. The paper formalizes this setting via latent causal models (LCMs) which consist of a structural causal model (SCM) over latent variables $z$ and a decoder $g$ mapping $z$ to observations $x$ (as also considered in previous works).\n\n### Theory\nThe main theoretical contribution (Thm. 1) is an identifiability result stating that, given an infinite number of pairs $(x,x’)$ resulting from all perfect stochastic single-node interventions (those that set a causal variable $z_i$ to a new noise variable, thus completely removing any influence from its causal parents), the true LCM is identified up to a permutation and element-wise invertible reparametrisation of the causal variables. \n\n### Algorithm/Method\nBased on this insight, the paper then proceeds to investigate algorithmic approaches to learning such LCMs from data. Specifically, two VAE-based approaches are investigated which either represent the causal variables and graph explicitly or only implicitly (ELCMs and ILCMs, respectively). ELCMs are only briefly discussed at a high level and are dismissed due optimization challenges. ILCMs, in which the latent space corresponds to the exogenous noise variables in the LCM, are portrayed as a more promising practical approach. ILCMs consist of an intervention encoder, a noise encoder and decoder, and a solution function (a map between exogenous noise and endogenous causal variables) and are trained by maximizing the corresponding ELBO. Two methods are proposed for post-hoc extracting causal structure from a trained ILCM. \n\n### Experiments\nIn experiments on three synthetic and image datasets, ILCMs are compared with other VAE architectures and SlotAttention w.r.t. DCI score, accuracy in inferring intervention targets, and structural Hamming distance and are shown to compare favourably. A new Mujoco-based dataset called CausalCircuit is also introduced in the process. ## Post-Rebuttal Update\nAfter discussion with the authors, some of my doubts and questions (particularly regarding the use of solution functions) have been resolved and addressed, and I have decided to increase my score as a result. I believe this is a solid paper that would benefit the NeurIPS community and recommend acceptance. \n\nI encourage the authors to take serious the suggestions brought up during the review phase to further improve the paper, particularly its accessibility and presentation, as they already indicated they would.\n____ \n\n*Disclaimer / review context: I have read the main paper carefully, but only skimmed the Appendix. Due to lack of familiarity with category theory, I did not check the proof of the main theorem and can thus not judge its soundness. I am quite familiar with the setting and related literature, but was not able to follow certain aspects in detail---specifically, the exact use of different solution functions---which I hope to be clarified during the discussion period. 
My score reflects my current impression of the paper, but I remain open to adjusting my score based on the authors' response.* \n\n### Strengths\n- The paper is generally well-written and well-motivated.\n- The problem setting of learning causal representation from weak supervision is novel and highly relevant.\n- The paper makes contributions to both theory (identifiability result) and algorithms (ELCMs and ILCMs) for causal representation learning.\n- The paper is well positioned in the related literature. \n- The paper is honest and transparent in discussing limitations, both regarding theoretical assumptions and implementation.\n- The paper introduces useful concepts such as isomorphisms between LCMs, which are of interest to the field of causal representation learning beyond the present work. \n\n### Weaknesses\n- Due to addressing both theory and algorithmic approaches, many technical details are deferred to the Appendix, which makes it hard to understand the proposed method in sufficient detail based on just the main text. \n- I have some doubts/reservations regarding the soundness of the proposed ILCMs (specifically, regarding the representation of SCMs through the noise variables and solution functions) and the evalution in terms of the DCI score (see below for more details).\n ### Main questions and comments\nMy main concerns and questions relate (i) to the concept of representing SCMs through the combination of exogenous variables and a solution function (which is also known as the reduced form SCM in the literature), as well as to (ii) the soundness of evaluation based on DCI scores.\n\n(i) To my understanding, this representation of SCMs is not equivalent to the original SCM, but strictly less expressive in that the reduced form SCM cannot model arbitrary interventions to the true causal mechanisms---see, e.g., Sec. 10 of (Schölkopf, B. & von Kügelgen, J. From Statistical to Causal Learning. 2022). Would the authors agree with this statement?\n\nI would like to better understand how the new solution functions $\\tilde{s}_I$ are defined:\n- Does this not require knowledge of the true mechanisms? \n- How does this relate to the assumption of perfect, stochastic interventions? \n- What aspect of the theory and proposed method would break down, e.g., for hard interventions (which set $z_i$ to a constant) or for soft interventions that preserve some dependence on the parents?\n\nSome more specific questions regarding the solution function(s):\n- Before Defn. 1, the new mechanism $\\tilde{f}_i$ is defined over the same exogenous domain $\\mathcal{E}_i$, but in Defn. 3, $\\tilde{\\epsilon}_i$ seems to be a new noise variable with different domain. Can you clarify?\n- Regarding footnote 3, do you have a formal argument or reference to back up this claim? \n- In Sec. 4.2, paragraph “Latents”, the noise variables are defined as $e=s^{-1}(z)$ and $\\tilde{e}=s^{-1}(\\tilde{z})$ and it is correctly stated that the latter corresponds to noise value that would have generated $\\tilde{z}$ under the *unintervened* SCM mechanisms. In the next paragraph, it is then stated that only those components of $\\epsilon$ corresponding to intervened nodes change. Where exactly in Appendix A is this proven? While I found this result surprising at first, after looking at some toy examples, it seems to hold. However, the way in which $\\epsilon_I$ needs to change will depend on the value of the other noise terms $\\epsilon_{-I}$, thus rendering them dependent. Could you comment on the relevance of this? 
\n\n(ii) Based on my understanding, the DCI score of Eastwood and Williams is tailored to a setting with mutually independent ground truth factors since it is based on predicting different ground truth latents from the learnt representation. It would seem that this is no longer necessarily a sound evaluation when the $z_i$ are dependent.\n- Did you consider this aspect? \n- What was your choice for the feature importance matrix?\n- Do you report only the disentanglement (D) score or the whole DCI score (average of D, completeness C and informativeness I)? Since the abbreviation D was used, this was not completely clear. \n\n### Other more minor comments, questions, and suggestions\n- l.4: “this requires…” makes it sound like a necessary condition, whereas what is shown is that the considered setting is sufficient. Consider rewording, e.g., “this involves…” \n- the restriction to perfect stochastic interventions seems important enough to be mentioned in the abstract and introduction\n- Fig. 1: consider adding an explanation of what orange nodes represent (changed post-intervention values?)\n- Related work: I would consider adding the following causal representation learning references:\n- - Adams, J., Hansen, N., & Zhang, K. Identification of partially observed linear causal models: Graphical conditions for the non-gaussian and heterogeneous cases. Advances in Neural Information Processing Systems 34, 2021.\n- - Xie, F., Cai, R., Huang, B., Glymour, C., Hao, Z., & Zhang, K. Generalized independent noise condition for estimating latent variable causal graphs. Advances in Neural Information Processing Systems 33, 2020.\n- - Kivva, B., Rajendran, G., Ravikumar, P., & Aragam, B. Learning latent causal graphs via mixture oracles. Advances in Neural Information Processing Systems 34, 2021.\n- - Chalupka, K., Perona, P., & Eberhardt, F. Visual causal feature learning. Uncertainty in Artificial Intelligence, 2015.\n- - Beckers, S., & Halpern, J. Y. Abstracting causal models. AAAI Conference on Artificial Intelligence, 2019.\n- l.104: including a intervention distribution in defining SCMs relates to (Rubenstein, P., Weichwald, S., Bongers, S., Mooij, J., Janzing, D., Grosse-Wentrup, M., & Schölkopf, B. Causal Consistency of Structural Equation Models. Uncertainty in Artificial Intelligence, 2017.)\n- Defn. 2: What does “compatible” mean here? What implications does this have for the (obs./int./cf.) distributions implied by the two SCMs, will they match?\n- Thm. 1: consider adding “in the sense of Defn. 2” to statement 2.\n- l. 151: AFAIK, Pearl defines hard interventions as those which set a variable to a constant; in this sense, the considered stochastic interventions are not hard (in general)\n- Sec. 4.1: some more details on ELCMs (e.g., form of the prior $p(z,z’)$) would be helpful\n- Sec. 4.2: the idea of ILCMs where structure is embedded implicitly seems related to: Leeb, F., Lanzillotta, G., Annadani, Y., Besserve, M., Bauer, S., & Schölkopf, B. Structure by architecture: Disentangled representations without regularization. 2020.\n- l. 214: what if the true post-intervention distribution is non-Gaussian? Why is Gaussianity needed here, or could a more flexible density also be used? I understand this is only the base density used to encode a more complex distribution over the noise variable, but this seems somewhat counterintuitive: do we not mostly care about the distribution of causal variables in the end? In this sense, forcing it to be Gaussian seems restrictive. \n- L. 
215 ff.: I was not able to really follow this and the next paragraph in satisfying detail; perhaps consider expanding/clarifying.\n- L. 238: since $q(I|x, x’)$ involves $\\mu_e$, does this not depend on the noise encoder? Is $\\mu_e$ shared between both?\n- L. 296: is the graph of dVAE-E not always the empty graph by construction? \n- L. 300: are objects/latents learned by SlotAttention assumed/constructed to be independent? \n- Evaluation: it could be interesting to also evaluate how well different approaches capture the true interventional distribution, e.g., by looking at an average KL between different true and inferred interventional distributions. \t\n- Fig.5: which ground truth variables are intervened upon here? Also, some more details on how the ILCM intervention is generated exactly would be helpful.\n- L.369: any intuition on what the failure mode is here? \n- (L.378) Generally, I think the pre- and post-intervention terminology is slightly misleading: Typically, interventions are associated with layer 2 queries in which the exogenous variables are not shared but randomly resampled. In the considered setting, the exogenous variables are shared which makes it a layer 3 / counterfactual query. This point has been made in a similar multi-view/weakly-supervised setting by von Kügelgen et al. [14, Sec. 3] and I think this could be articulated more clearly throughout. In particular, “pre-“ and “post-“ suggest a temporal succession, which is not the same as the considered *hypothetical* intervention under the same context/background condition/noise values; the analogy to agents observing the effect of actions may thus only hold approximately, as there some noise variables may change (naturally, i.e., not as the result of the intervention). N/A",
" This paper aims to identify the latent causal models from the observational data, where data can be intervened by some pre-defined distributions. To achieve this goal, the authors firstly demonstrate that if the observation data is generated according to equation 1, then the causal model can be identified. Based on this theory, the authors design two specific models to learn the causal variable and causal structure. The first model is based on VAE and the second one infers the intervention from the noisy variable. In addition, the authors also discuss the potential functions of their proposed models. In the experiments, the authors demonstrate the effectiveness of their model on the tasks of pedagogy, Causal3DIdent and CausalCircuit. \nIn general, I believe the studied problem is interesting. However, I find the paper quite hard to read. There are so many contents referred to the appendix or other papers. For example, the formal definition of diffeomorphic function, at least, there should be some intuitive explanations. What is faithful in the causal domain? What is p_M^X? Ideally, the reader should clearly understand what the authors would like to deliver. However, I think there are a lot of concepts that should not be regarded as common knowledge (it is not necessary for the reviewers to read the appendix). Because of these unclear notations, I have trouble in reading this paper. \n\nThere should be an explicit algorithm on how to train the ELCM and ILCMs. I cannot capture how to relate the solution of ELCM and ILCMs with the theoretical results in section 3. Whether the results learned from ELCM and ILCMs can lead to a optimal result of theory 1?\n\nI'm wondering how accurate are the inferred interventions, and how the error of the inference influence the following causal prediction? Whether there are some strategies to overcome the inference error?\n\nConsidering that the paper seems to study a very interesting problem, I tend to be slightly positive. However, I can lower my rating if the other reviewers do not support this paper.\n See the Strengths And Weaknesses The authors have discussed the limitations of their model. ",
" This paper has contribution in two parts: Identifiability of causal models of high level (latent) variables from low-level data like that of pixels of images based on weak supervision, and two practical algorithms to learn causal models based on the weak supervision. The identifiability result is based on the assumption that pairs of datum where one of them is not intervened upon and the other is, after an atomic hard intervention to one of the high level variables, is available for training. The latent variables need to be continuous for the identifiability result to hold. \nIn addition, there are two algorithms, Explicit Latent Causal Models which are based on explicit modelling of causal graphs as well as Implicit Latent Causal Models which do not model the causal graph directly but rather parameterise the noise encodings. Experiments are performed on very simple datasets to demonstrate the practical algorithms. **Strengths**:\n\nThe identifiability result follows from the work of Locatello et al. 2020 where the identifiability of disentangled representations are presented. The authors in this paper do a good job of extending it to incorporate the notion of graphs in latent variable models and hence it enables the identifiability of arbitrary causal models from unknown interventions.\n\nIn addition, the idea of implicitly parameterising the causal model and then using the latent variable data to learn a causal model from an off-the-shelf algorithm is very interesting as this is something I believe has not been done before in causal representation learning.\n\n**Weaknesses**:\n\nThe identifiability result is based on some assumptions which are hard to verify. For any given dataset of samples, it is hard to verify, let alone know, if the data samples were produced from a single hard intervention on one of its latent variables or if there was a soft intervention or a multi-target intervention. So in this regard, I am a bit unsure in which real world image data does these assumptions hold. The presented results are on very simple and hand-designed datasets where the assumptions are made to hold. Also see below in limitations regarding other assumptions which are not addressed.\n\nThe main weakness of the paper is that the experiments are very weak, to say the least. The datasets are already simple in the sense that all the assumptions of the identifiability result are respected, and moreover, the number of high level variables on which the causal graph is defined is utmost 4, which is a very small number. For reference, for up to 4 variables, all the DAGs can even be enumerated and is a much simpler task. It is disappointing to see that even the implicit LCM algorithm which does not model a causal graph explicitly is not shown to work on more number of variables (10 or 20 for example). For the ELCM, the authors mention that it takes a performance hit when the number of variables increases due to optimisation issues. If it is the same case with ILCM as well, then I would argue that the presented algorithms are far from being practical. This could be a case of identifiability does not imply learnability, unless the authors convincingly demonstrate in higher dimensions that this works reasonably.\n\nAdding to the above point, I suggest the authors to consider a synthetic dataset (maybe with a random nonlinear SCM plus normalizing flow like in the pedagogical experiment) where they can create data samples such that number of high level variables are 10 (or more). 
Multiple such random datasets could be produced and trained on with an ILCM. This would at least highlight how the ILCM performs in higher dimensions. Given that ENCO can handle variables upto 1000 variables, I do not see any limitations in performing these experiments.\n\nThe proposed algorithm achieves both high disentanglement scores as well as SHD. But this seems a bit counterintuitive. Disentanglement measures how much each of the factors of variation are disentangled from each other while SHD measures how well the ground truth graph is recovered. If the ground truth graph is not extremely sparse, then both cannot be high. Maybe this is only a behaviour in low dimensions which the authors consider. In addition, it could be that DCI score maybe captures something similar to a structural metric. I personally do not think that it is important to measure disentanglement if the focus is to get the correct graph. If it is still important for some tasks, I encourage the authors to measure other disentanglement metrics as well.\n\nIn summary, I feel that the theoretical contribution is very interesting and can be a stepping stone for identifiability results with more relaxed assumptions. However, given that the experiments are weak and the practical algorithms need more realistic settings, I think that more work needs to be done in this aspect before the paper can be accepted. Here are some of the questions I have for the authors:\n\n- For Causal3Dindent dataset, only 6 graphs are considered, why not more? And are the considered six graphs random? Also please give the confidence intervals.\n\n- I do not get the intuition behind using a beta term for the ELBO. Is it not the case that the goal of the current setup is to get the correct causal graph? Encouraging disentanglement through the beta term might be encouraging more independent factors, however, that might not be necessarily the case with hierarchical latent spaces with causal graphs.\n\n- Does the ILCM also suffer from optimisation issues in higher dimensions, and if so, what advantages then practically speaking does it offer as against ELCM.\n\n One of the main limitations of the proposed approach is that the causal variables are assumed to be causally sufficient (i.e. they are apriori known or given in some way through slots etc and they alone define a causal model). However, this assumption is notoriously hard to be true in any realistic dataset. For any image, the cause of a particular variable could be outside the image itself, and hence hidden. This point should be discussed and highlighted. In my opinion, a true practical algorithm should be able to handle latent variables in causal models as well. Nevertheless, I feel that this aspect should be discussed as it is an important aspect of causal representation learning.",
" This paper provides identifiability results for causal discovery under a weakly supervised setting where we have access to data generated from perfect atomic interventions. Along with the theoretical results, it also proposes two algorithms for learning, one requiring an explicit DAG search and the other can learn the graph implicitly.\n This is a nice result and I enjoyed reading the paper! The strength of the paper are:\n\n- The theoretical result is novel and allows identifiability of causal graphs with interventional data. This is important and future algorithms can be made based on this.\n\n- The ILCM description is nice as it allows learning of the graph without explicit graph searching.\n\n- The experiments are non trivial and shows off the implication of the result.\n\n- The paper is well presented and written clearly. It was an enjoyable read.\n\n As it stands, I think it is a very good paper and it should be accepted. A few questions and (minor) criticisms:\n\n- This is the question most directly related to the current paper - I didn't find it clear in the paper as to why ILCM should perform better than ELCM. I can see that ILCM doesn't require explicit graph search which is nice, but can it not also be stuck in local minima just like ELCMs do, if it has found an (implicit) incorrect graph? Perhaps you could clarify this.\n\n- The situation where we have interventional data from perfect and atomic interventions is not very applicable. Although the current work is a good starting point, further investigation into how to relax the interventional data assumption is needed. The authors should be commended for calling out the limitation of their assumptions. I am quite interested in understanding what happens to the identifiability results if we still have data from heterogeneous regimes, but possibly from imperfect/non-atomic interventions? Can the results be extended? In the paper, the authors give a counter example for when the intervention is imperfect, but what about the case when we don't have atomic interventions?"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"pUgHe97mVR",
"JqsQLMPimE3",
"JqsQLMPimE3",
"Pg84bQjpf81",
"T_W7mNLNzG",
"dAypEhMKfu1",
"nips_2022_dz79MhQXWvg",
"nips_2022_dz79MhQXWvg",
"wevYiyJVQ0L",
"G3svjvZ3WVj",
"Sk_Erlibqp",
"8iSQB5Cpo9",
"xMhNmUv4C4v",
"nips_2022_dz79MhQXWvg",
"nips_2022_dz79MhQXWvg",
"nips_2022_dz79MhQXWvg",
"nips_2022_dz79MhQXWvg"
] |
nips_2022_tro0_OqIVde | HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions | Recent progress in vision Transformers exhibits great success in various tasks driven by the new spatial modeling mechanism based on dot-product self-attention. In this paper, we show that the key ingredients behind the vision Transformers, namely input-adaptive, long-range and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework. We present the Recursive Gated Convolution ($\textit{g}^\textit{n}$Conv) that performs high-order spatial interactions with gated convolutions and recursive designs. The new operation is highly flexible and customizable, which is compatible with various variants of convolution and extends the two-order interactions in self-attention to arbitrary orders without introducing significant extra computation. $\textit{g}^\textit{n}$Conv can serve as a plug-and-play module to improve various vision Transformers and convolution-based models. Based on the operation, we construct a new family of generic vision backbones named HorNet. Extensive experiments on ImageNet classification, COCO object detection and ADE20K semantic segmentation show HorNet outperform Swin Transformers and ConvNeXt by a significant margin with similar overall architecture and training configurations. HorNet also shows favorable scalability to more training data and larger model sizes. Apart from the effectiveness in visual encoders, we also show $\textit{g}^\textit{n}$Conv can be applied to task-specific decoders and consistently improve dense prediction performance with less computation. Our results demonstrate that $\textit{g}^\textit{n}$Conv can be a new basic module for visual modeling that effectively combines the merits of both vision Transformers and CNNs. Code is available at https://github.com/raoyongming/HorNet. | Accept |
This paper introduces a new operation **gnConv** and a computer vision network architecture **HorNet**. Motivated by the design philosophy behind the success of vision Transformers, the key idea of gnConv is to build a recursive form of gated convolution, which makes the module input-adaptive with long-range and high-order spatial interactions. Consistent improvements are shown over Swin and ConvNeXt on well-established CV benchmarks such as image classification on ImageNet, semantic segmentation on ADE20K, and object detection on COCO.
The paper receives unanimous accept recommendations from all reviewers (Reviewer 3Snz champions the paper with a rating of 8), leading to an ``Accept'' decision. | val | [
"7xxFoo3ZAV1",
"3anNWkWmFeDh",
"j9YaO4udujL",
"9nq_zcPB3U",
"amll9CVcER-",
"X_wvB6sOvGa",
"MMb6cU_t0Xu",
"MzZTpdYb24",
"wpEacKUrgqH",
"V_EoWH4iJXu",
"IfJzWYf_Uuw",
"-IuYWTa3tPR",
"K0XufsrXDBo",
"IX5190b0vH_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my concerns. I have increased the rating to 5.",
" Thanks a lot for checking our response and providing valuable feedback.\n\nHigh-order spatial interaction is the key concept introduced in our paper. Previous work on Transformer-like architectures usually explores the long-term and input-adaptive weights in the self-attention mechanism. In this paper, we find the high-order spatial interactions in the self-attention mechanism are also critical to the expressive power of Transformers. \nFor vision Transformers, the high-order spatial interactions in our paper represent the two-order interactions among $\\mathbf{q}$, $\\mathbf{k}$, and $\\mathbf{v}$. In each self-attention layer, the input feature $\\textbf{x}$ is first transformed into three versions $\\mathbf{q}$, $\\mathbf{k}$ and $\\mathbf{v}$ via linear projections. The attention weight of position $i$ is computed by $\\mathbf{a}_i=\\mathbf{q}_i^{\\top}[\\mathbf{k}_1,\\ldots,\\mathbf{k}_n]$, \nwhich is the first spatial interaction since the relations between $\\mathbf{q}_i$ and $[\\mathbf{k}_j, j=1,\\ldots,n]$ from all $n$ spatial locations are considered. The attention weights then interact with $\\mathbf{v}$ to obtain the final output: $\\mathbf{x}_i = \\sum_j\\hat{\\mathbf{a}}_i\\mathbf{v}_j$, which is the second spatial interactions since the relations of $\\mathbf{a}$ and $\\mathbf{v}_j$ from all spatial locations are considered. Since the second interaction depends on the first one, we regard the whole process as a two-order interaction among different spatial locations. \n\nAs shown in Figure 1, we extend the above analysis to other popular operations in deep vision models. It is worth noting that while there exists complex and often high-order interactions between two spatial locations in a deep model due to the non-linearity, we find that the **explicit** and **high-order spatial interactions** introduced by the architectural designs are beneficial to improving the modeling power of vision models. Therefore, we further explore the concept of high-order interactions and propose our recursive gated convolutions which can accomplish arbitrary-order spatial interactions with a highly efficient implementation. Our ablation study in Table 4(a) also shows that the performance of models can be largely improved when we introduce high-order spatial interactions. We have also provided a more formal analysis of high-order interactions in line 179-183 and Appendix B.\n\nWe hope our response can address your concerns. Please let us know if you have further questions on this issue.\n",
" Thanks for the response, which solves my questions well. I still need precision on motivation in order to adjust my final rate.\n\n\"We identify the three key ingredients behind the success of recent vision Transformer models: input-adaptive, long-range, and high-order spatial interactions.\" How to understand \"high-order spatial interactions\" here? Could I request a more concrete explanation?\nThank you",
" Dear reviewer 3Snz,\n\nDoes our response address your concerns? Please feel free to let us know if you have any further questions.\n\nBest wishes!",
" Dear reviewer j8qU,\n\nDoes our response address your concerns? Please feel free to let us know if you have any further questions.\n\nBest wishes!",
" Dear reviewer dnYU,\n\nDoes our response address your concerns? Please feel free to let us know if you have any further questions.\n\nBest wishes!",
" Dear reviewers and area chair,\n\nFirst, we would like to thank you all for your time and insightful comments on our paper. We are encouraged to hear that the reviewers think our design is interesting and contributes much to the performance (Reviewer dnYU), our architecture is novel (Reviewer j8qU), simple and effective (Reviewer 3Snz), the writing is good and clear (all reviewers), our experiments are extensive (Reviewer dnYU), carefully designed (Reviewer j8qU), thorough and solid (Reviewer 3Snz). \n\nWe also appreciate their suggestions to improve our work. We notice both reviewers j8qU and 3Snz raise concerns about the speed of our models. We provide a throughput analysis in our rebuttal. Although our models are slower than ConvNeXt series, we find HorNet can generally achieve similar or slightly fast speeds than various vision Transformers. Notably, as shown in Figure 3(c), the higher classification accuracy helps our models achieve better speed-accuracy trade-offs than ConvNeXt and prevalent vision Transformers. Therefore, our models still exhibit very competitive complexity-accuracy trade-offs with these recent state-of-the-art methods.\n\nSince our experiments are designed to verify the superior of our new operation over previous basic operations like plain convolution and self-attention, we directly adopt the overall architectures and training configurations of ConvNeXt and Swin Transformers. As mentioned by reviewer j8qU, this experiment setup leads to lower performance on ImageNet-1K compared to some recent methods. In our response, we list several possible directions to further improve our models. Besides, we also show that HorNet can achieve state-of-the-art level performance on downstream object detection and semantic segmentation by directly adopting several recent dense prediction frameworks thanks to our simple architecture.\n\nWe also include discussions on the related papers suggested by the reviewers. Due to the page limit, we include the additional analysis and discussions in the appendix, where we highlight the modified parts in blue. \n\nWe thank you again for your time and feedback. We hope our response can address your concerns.\n",
" We sincerely thank the reviewer for the positive comments on our work! We address the questions and clarify the issues accordingly as described below.\n\n>**Q1: About the throughput of HorNet**\n\n**[Reply]** Thanks for your suggestion. Here we provide the comparisons of the throughput of HorNet and ConvNeXt, along with the FLOPs and the top-1 accuracy on ImageNet.\n\n|Model | GFLOPs| Throughput (images / s) | Acc. (%)|\n|------|-------|--------|-------|\n|ConvNeXt-T| 4.5 | 1010.3 |82.1 |\n|HorNet-T$_{7\\times 7}$| 4.0 | 845.7 |82.7 | \n|ConvNeXt-S| 8.7 | 621.5 |83.1 |\n|HorNet-S$_{7\\times 7}$| 8.8 | 525.8 |83.8 | \n|ConvNeXt-B| 15.4 | 440.8 |83.8 |\n|HorNet-B$_{7\\times 7}$| 15.6 | 410.0 |84.2 | \n\nAs can be seen in the above table, HorNet runs about 15% slower than ConvNeXt for tiny and small models. For base models, the gap is reduced to about 7%. We admit that at current time we have not found a more hardware-friendly implementation of the $\\textit{g}^\\textit{n}\\text{Conv}$. However, we note that our HorNet series still enjoy better speed-accuracy trade-offs than ConvNeXt series, as shown in Fig. 3(c). We think that with more careful hardware-friendly designs, our $\\textit{g}^\\textit{n}\\text{Conv}$ can serve as a general and efficient operation that can be used as a drop-in replacement of self-attention to boost the performance of various vision backbones.\n\n>**Q2: About the comparisons with better performance on ImageNet.**\n\n**[Reply]** Thanks for your insightful question. The main goal of our experiments is to verify the effectiveness of our new operation. Therefore, we focus on comparing our models with typical ConvNets and vision Transformers using similar architectures and training configurations. Due to the limit of computational resources, we cannot conduct very large-scale experiments like CoAtNet (billions of parameters models or training dataset larger than ImageNet-22K). Except for scaling up the model size and training data, we think the performance of our models can be further improved from the following perspectives: 1) more optimized overall architectures (optimized depth/width for each stage), 2) better patch embedding strategies (overlapping convolutional layers for input embedding and downsampling), 3) more advanced training methods, 4) more efficient ways to produce adaptive weights (using downsmapled features to produce attention weights like MViT), 5) hybrid architectures (combining $\\textit{g}^\\textit{n}\\text{Conv}$ with self-attention and plain convolutions). We have added some discussions on how to further improve our models in the revised appendix (supplementary material). Besides, as mentioned in our reply to Reviewer j8qU, we also show that HorNet can achieve state-of-the-art level performance on downstream object detection and semantic segmentation in the setting of not using further inference tricks or extra pre-training data.\n\n>**Q3: About more discussions on previous works.**\n\n**[Reply]** Thanks for pointing out this. We notice that the mentioned paper SORT also uses element-wise multiplication to introduce second-order interactions, and our $\\textit{g}^\\textit{n}\\text{Conv}$ shares the similar idea. The main difference between $\\textit{g}^\\textit{n}\\text{Conv}$ and SORT is that $\\textit{g}^\\textit{n}\\text{Conv}$ is more extendable to achieve higher order spatial interactions under a controllable computational budget. 
We have included more discussion of previous works on higher-order interactions, as well as of the gating mechanisms in LSTMs, in our revised version. \n",
" > **Q2: Throughput of each method**\n\n**[Reply]** Thanks very much for this suggestion. We have compared the latency on GPU with the two main baseline methods in Figure 3(c). We agree that the multiple small matrix multiplications introduced by our method will affect the speed of our method on GPU. We also observed that our method is slower than ConvNeXt by 7%~15% with similar FLOPs. Meanwhile, thanks to the highly efficient depth-wise convolutions implementation of CuDNN, we also see that our models achieve similar or slightly faster speeds than typical vision Transformers with similar FLOPs. Notably, as shown in Figure 3(c), the higher classification accuracy helps our models achieve better speed-accuracy trade-offs than ConvNeXt and Swin Transformers. Therefore, we believe the speed of our method is still competitive with these recent models. We provide the detailed throughput statistics in the following table. Apart from ConvNeXt and Swin Transformers, we also include more powerful MViTv2-T/S/B models as suggested by the reviewer (since the other two models are not publicly available, we cannot measure their throughput). We have added these results in the revised appendix (supplementary material), where we highlight the modified parts in blue.\n\n\n|Model | GFLOPs| Throughput (images / s) | Acc. (%)|\n|------|-------|--------|-------|\n|ConvNeXt-T| 4.5 | 1010.3 |82.1 |\n|Swin-T| 4.5 | 832.2 |81.3 |\n|MViTv2-T| 4.7 | 728.4 |82.3 |\n|HorNet-T$_{7\\times 7}$| 4.0 | 845.7 |82.7 | \n|ConvNeXt-S| 8.7 | 621.5 |83.1 |\n|Swin-S| 8.7 | 520.7 |83.0 |\n|MViTv2-S| 7.0 | 531.5 |83.6 |\n|HorNet-S$_{7\\times 7}$| 8.8 | 525.8 |83.8 | \n|ConvNeXt-B| 15.4 | 440.8 |83.8 |\n|Swin-B| 15.4 | 364.8 |83.5 |\n|MViTv2-B| 10.2 | 369.1 |84.4 |\n|HorNet-B$_{7\\times 7}$| 15.6 | 410.0 |84.2 | \n\n\n>**Q3: Theoretical analysis of high-order interactions**\n\n**[Reply]** Thanks very much for this suggestion. We agree that theoretical analysis of the proposed high-order interaction mechanism is helpful to better understand our model. But to be honest, currently, we could not find a good theorem to explain such interaction mechanisms in deep networks, since theoretically analyzing a complex system like HorNet or $\\textit{g}^\\textit{n}\\text{Conv}$ is very difficult. To the best of our knowledge, there is also no theorem to thoroughly analyze the effectiveness of the prevalent self-attention mechanism. Therefore, we would like to leave this as future work. In our paper, we have some empirical and intuitive analyses to show the effectiveness of the high-order interaction mechanism in Section 3.1 and Appendix B. We hope these explanations can help readers to better understand our motivation and provide some guidance to design better architectures in future research.\n",
" We sincerely thank the reviewer for the detailed comments and insightful advice. We address the questions and clarify the issues accordingly as described below.\n\n>**Q1: About the experimental results and the comparisons with recent methods**\n\n**[Reply]** Thanks for your detailed suggestions. We agree that our models cannot outperform some recent methods like Dynamic Group Transformer [r1] and Pale Transformer [r3]. However, it is worth noting that the goal of our experiments is not to achieve state-of-the-art performance on ImageNet-1K, but to demonstrate the effectiveness of the new basic operation and the new family of architectures (HorNet). We also didn’t claim our models can state-of-the-art performance on ImageNet.\n\n- **i. Our main contribution is the new basic operation instead of a visual recognition system that can achieve state-of-the-art performance.** Our experiments are designed to clearly verify the superior of our design over previous basic operations like plain convolution and self-attention. Therefore, we choose to strictly follow the basic architecture and the training configuration of widely used architectures Swin Transformers and ConvNeXt (See our descriptions in Section 3.2). Our goal is to provide a new and useful basic operation for future research, instead of developing a state-of-the-art visual recognition system. We believe both directions are very important for deep learning and the computer vision community. Note that many very impactful methods like Swin Transformers and ConvNeXt also didn’t achieve the best performance on ImageNet-1K.\n\n- **ii. Our performance on ImageNet-1K can be further improved if more advanced and complex designs are adopted.** As mentioned above, we directly adopt the widely used architectures in our experiments. Therefore, there is still substantial room to further improve the performance on ImageNet-1K. We think many techniques that have been used in previous work can be useful, including further optimized overall architectures (optimized depth/width for each stage), better patch embedding strategies (overlapping convolutional layers for input embedding and downsampling), more efficient ways to produce adaptive weights (using downsampled features to produce attention weights like MViT), more advanced training methods and hybrid architectures (combining $\\textit{g}^\\textit{n}\\text{Conv}$ with self-attention and plain convolutions).\n\n- **iii. Our simple architectures based on Swin Transformers are easy to combine with state-of-the-art frameworks on downstream tasks.** Since most recent state-of-the-art object detection and semantic segmentation frameworks are designed and tuned on Swin Transformers, we can directly apply these methods to HorNet without further tuning. To further show the potential of our model on downstream tasks, we apply our HorNet-L model to recent high-performance dense prediction frameworks including HTC++ [A], DINO [B], and Mask2Former [C]. The results are listed in the following table. We see our method can achieve state-of-the-art level performance on COCO and ADE20K in the setting of not using further inference tricks (e.g., TTA for object detection) or extra pre-training data (e.g., Object365 [D] pre-training for COCO or COCO-stuff [E] pre-training for ADE20K). 
\n\nObject detection results:\n|Model| Framework| mAP$^{box}$ | mAP$^{mask}$ |\n|------|-------|--------|-------|\n|Swin-L | HTC++| 57.1 | 49.5|\n|ViT-Adapter-L [F] | HTC++ | 57.9 | 50.2 |\n| HorNet-L | HTC++ | **58.1** | **50.5** |\n|Swin-L | DINO | 57.6 | - |\n|HorNet-L | DINO | **59.2** | - |\n\nSemantic segmentation results:\n|Model| Framework| mIoU$^{ss}$ | mIoU$^{ms}$ |\n|------|-------|--------|-------|\n|Swin-L | Mask2Former | 56.1 | 57.3 |\n|HorNet-L | Mask2Former | **57.5** | **57.9** |\n\nWe will include discussions of these suggested papers, as well as of how to achieve better performance on ImageNet-1K with $\textit{g}^\textit{n}\text{Conv}$, in the revised paper. We will also add the details and results based on state-of-the-art dense prediction frameworks. Due to the page limit, we have included these new results and discussions in the revised appendix (supplementary material), where we highlight the modified parts in blue.\n\n[A] Hybrid task cascade for instance segmentation, CVPR 2019\n\n[B] DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection, arXiv:2203.03605\n\n[C] Masked-attention mask transformer for universal image segmentation, CVPR 2022\n\n[D] Objects365: A Large-scale, High-quality Dataset for Object Detection, ICCV 2019\n\n[E] COCO-Stuff: Thing and Stuff Classes in Context, CVPR 2018\n\n[F] Vision Transformer Adapter for Dense Predictions, ECCV 2022\n\n",
" We sincerely thank the reviewer for the detailed comments and advice. We address the questions and clarify the issues accordingly as described below.\n\n>**Q1: About the motivation of high-order interactions**\n\n**[Reply]** Our motivation originates from the observation of the recent success of vision Transformers. We identify the three key ingredients behind the success of recent vision Transformer models: input-adaptive, long-range, and high-order spatial interactions. While previous work borrows new designs including large kernels and input-adaptive weights, we demonstrate that the explicit 2-order spatial interactions achieved by the self-attention operation (as shown in Fig. 1) are beneficial for vision models. Therefore, it is natural to investigate whether higher-order interaction can further enhance the modeling capacity of vision models.\n\n>**Q2: About the spatial-wise interaction**\n\n**[Reply]** Our high-order interaction operator ($\\textit{g}^\\textit{n}\\text{Conv}$) is designed to be spatial-wise instead of element-wise. Although $\\textit{g}^\\textit{n}\\text{Conv}$ is built with only depth-wise convolution, linear projection and element-wise multiplication, we have shown that $\\textit{g}^\\textit{n}\\text{Conv}$ can indeed achieve high-order spatial interactions efficiently. The recursive formula of $\\textit{g}^\\textit{n}\\text{Conv}$ is $p_{k+1}=f_k(q_k)\\odot g_k(p_k)$ (see Equ. (3.3)), \nwhere $f_k$ is a depth-wise convolution and $g_k$ is a linear projection or identity mapping.\nTherefore, we have\n$$\np_{k+1}^{(i, c)}=\\sum_{j\\in\\Omega_i} w_{i\\to j}^c q_k^{(j,c)}g_k^{(i, c)},\n$$\nwhere $\\Omega_i$ denotes the receptive field of $f_k$ and we show that the feature at spatial location $i$ explicitly interacts with another feature at spatial location $j$. As a result, each recursive step will increase the order of spatial interaction and the final output of $\\textit{g}^\\textit{n}\\text{Conv}$ considers $n$-order spatial interactions. \n\nTo sum up, the depth-wise convolution aggregates features from different spatial locations while the element-wise multiplication helps to introduce explicit interaction. To better understand why our operation introduces spatial interactions rather than channel-wise interactions, please also refer to the analysis in Equ. (3.7) and (3.8), where we show our $\\textit{g}^\\textit{n}\\text{Conv}$ can accomplish the goal of input-adaptive spatial mixing like self-attention.\n\n>**Q3: About the complexity analysis of different $n$**\n\n**[Reply]** Thanks for your suggestion. The complexity of our $\\textit{g}^\\textit{n}\\text{Conv}$ has an upper bound ${\\rm FLOPs}(\\textit{g}^\\textit{n}\\text{Conv}) < HWC(2K^2 + 11/3\\times C + 2)$ (see Equ. (3.6)), thanks to the design of channel dimension for each order \n$$\n C_k = \\frac{C}{2^{n-k-1}}, \\qquad 0\\le k\\le n-1.\n$$\nWe have also provided a closed form of the complexity in Appendix A:\n\\begin{equation}\n {\\rm FLOPs}(\\textit{g}^\\textit{n}\\text{Conv}) = HWC\\left[2K^2\\left(1 - \\frac{1}{2^n}\\right) + \\left(\\frac{11}{3} - \\frac{2}{3\\times 4^{n-1}}\\right)C + 2 - \\frac{1}{2^{n-1}}\\right],\n\\end{equation}\nwhere one can easily derive the upper bound of the complexity of our $\\textit{g}^\\textit{n}\\text{Conv}$.\n",
" The paper presents the Recursive Gated Convolution (g^n Conv) that performs high-order spatial interactions with gated convolutions and recursive designs. The proposed module can serve as a plug-and-play module for both transformer and convolutional neural networks.\nBased on the proposed module, HorNet is further introduced, showing good results on ImageNet classification, COCO object detection and ADE20K semantic segmentation.\n\n \nPros:\n- Well organized and good writing.\n- Extensive experiments on multiple datasets and tasks.\n- The design of g^nConv is interesting and contributes much to the performance.\n- Limitations are stated in this paper.\n\n\nCons:\n- The motivation is not very clear to me. Why do we need high-order interactions?\n- Since the operator is Mul rather than MatMul, then the high-order interaction seems like an element-wise one rather than the spatial-wise interaction as stated in the paper. Or it can be recognized as a channel-wise one (element-wise multiply -> channel projection -> element-wise multiply ...)?\n- Could the author provide the complexity analysis for different n? See the Strengths And Weaknesses part See the Strengths And Weaknesses part",
" This paper proposes a new convolution-based deep neural architecture named $g^nConv$. Its essential operation is the gated convolution which multiplies the output of linear projection layers and depth-wise convolution layers. It also introduces high-order interactions with recursive gating. The authors conduct various experiments to show the effectiveness of the proposed results, including ImageNet classification, COCO object detection, and ADE20K semantic segmentation. Strengths:\n1. This is a well-written paper with carefully designed experiments.\n2. This paper combines several deep learning techniques in an interesting way to get a novel architecture for image classification, detection, and segmentation.\n\nWeaknesses:\n1. The experimental result section is weak. The authors did not compare their proposed method with the state-of-the-art methods. The authors are suggested to add more recent works for comparison, e.g., Dynamic Group Transformer [Liu etal, IJCAI 2022], MViTv2 [Li etal, CVPR 2022], Pale Transformer [Wu etal, AAAI 2022]\n2. The authors did not compare the throughput of each method. Since the proposed method involves a lot of small matrix multiplications that cannot be parallelized, and depth-wise convolutions, I guess this method should be slower than the other methods, even though their FLOPs are similar.\n\n 1. It would be better if the authors could theoretically analyze why High-order interactions can improve network performance.\n2. The authors are suggested to compare with more recent works and evaluate the throughput of each method. Yes",
" This paper proposes a new operation (gnConv) and network (HorNet) to perform computer vision tasks. Motivated by the success of vision Transformers, the key idea is to build an architecture that has input-adaptive, long-range and high-order spatial interactions. The authors did so by proposing a recursive form of gated convolution, which has the capability of going even higher order explicit spatial interaction than 2. On well-established computer vision benchmarks such as image classification on ImageNet, semantic segmentation on ADE20K, object detection on COCO, the authors showed consistent improvement over Swin Transformer and ConvNeXt. \n\nPOST-REBUTTAL UPDATE:\n\nI have read the authors' rebuttal. While I am a bit disappointed by the answer to my Q2, I am satisfied with the rebuttal overall and the paper. In my opinion / experience, in this day and age, it is not an easy feat to achieve the consistent improvement as the authors have shown in this paper, and I have decided to maintain my original rating of 8. Strengths:\n- The proposed architecture is very simple and effective. Higher-order interactions have been desired but have never been prevalent, possibly due to gradient problems. But the authors seem to have got it to work quite well.\n- Writing is very clear. \n- Experiments are thorough, and solid improvements are observed throughout.\n\nWeaknesses:\n- No major weaknesses. I just have a couple of questions.\n\nOverall, I think the originality, quality, clarity, and significance are all high. L316 states that HorNet currently runs slower than ConvNeXt. It would be good to clarify how many times slower. \n\nHorNet seems to outperform Swin Transformer and ConvNeXt pretty consistently. But have the authors considered comparison against models with better performance on ImageNet, such as CoAtNet [8]? Related, what are ways that will allow HorNet to achieve state-of-the-art on ImageNet classification?\n\nFinally, I think the higher-order interaction really originated from the gating mechanisms in LSTM. In vision, \"SORT: Second-Order Response Transform for Visual Recognition\" from ICCV 2017 is one of the earliest works. It would be great to add proper discussion on these. I am reasonably happy with the Limitations section of the paper from L315 to L318. As I wrote earlier, it would be good to be more transparent and specific. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"3anNWkWmFeDh",
"j9YaO4udujL",
"X_wvB6sOvGa",
"MzZTpdYb24",
"wpEacKUrgqH",
"IfJzWYf_Uuw",
"nips_2022_tro0_OqIVde",
"IX5190b0vH_",
"V_EoWH4iJXu",
"K0XufsrXDBo",
"-IuYWTa3tPR",
"nips_2022_tro0_OqIVde",
"nips_2022_tro0_OqIVde",
"nips_2022_tro0_OqIVde"
] |
nips_2022_I-ggHgon-Az | What You See is What You Classify: Black Box Attributions | An important step towards explaining deep image classifiers lies in the identification of image regions that contribute to individual class scores in the model's output. However, doing this accurately is a difficult task due to the black-box nature of such networks. Most existing approaches find such attributions either using activations and gradients or by repeatedly perturbing the input. We instead address this challenge by training a second deep network, the Explainer, to predict attributions for a pre-trained black-box classifier, the Explanandum. These attributions are provided in the form of masks that only show the classifier-relevant parts of an image, masking out the rest. Our approach produces sharper and more boundary-precise masks when compared to the saliency maps generated by other methods. Moreover, unlike most existing approaches, ours is capable of directly generating very distinct class-specific masks in a single forward pass. This makes the proposed method very efficient during inference. We show that our attributions are superior to established methods both visually and quantitatively with respect to the PASCAL VOC-2007 and Microsoft COCO-2014 datasets. | Accept | The paper proposes an attribution prediction approach to enhance the interpretability of DNN models. For this purpose, a second “explainer” model is used which can generality class-specific masks for the classification of relevant regions.
The reviewers have overall commended the novelty of the approach, the clear writing, and the detailed experiments. However, there were concerns about the training required for explainability, which makes the approach relatively more computationally demanding. Given that the performance is better than GradCAM, the approach still offers an advantage and is a suitable alternative. The rebuttal provided further clarity and corrections in light of the initial reviews; these changes must be incorporated in the final version. The AC will also support incorporating a user study, since the final goal of these attributions is to provide visual explanations for human users.
Based on the reviews and rebuttal, AC recommends accepting the paper and would like to congratulate the authors!
| train | [
"CqHSdWhm_I6",
"zqAoNCB8_D",
"paT6J9_42Fc",
"UOP3tdQTuGX",
"dwezajD8q7c",
"8F_O1R4Iot5",
"wW3xd2xZG_Q",
"3IMhr9PBwa7",
"lC5gFzWPAJy",
"h7XO2jPIp0X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed response. \n\nThe authors have clarified the problem setting context. I would argue that this is not a post-hoc explainability model in the classical sense. Using ground truth class labels for training, the masking network adds elements that are not faithful to the base classifier. Thus, I am unsure of the practical significance of the explanation. I would urge the authors to discuss this issue in the final draft if accepted.\n\nThank you for the description of the regions highlighted for labels other than the top predicted class.\n\nI rather view the strength of this method as an unsupervised method for object localization. It does not require any information about object bounding boxes, yet can reasonably locate the objects.",
" I would like to thank authors for their responses. Overall, it clarified many things. Now:\n - my questions about evaluation have been clarified. Now I see the evaluation as being strong\n - the paper still lacks clarity and if, accepted, my suggestion is to make an effort to improve presentation\n - the key part: technical contribution and idea. My view is that the contribution is on a thin edge: it draws heavily inspiration from image saliency, but it presents from a perspective that, as authors emphasized, is novel. Previous saliency methods could have been used for the same application, but they have not, at least to my current knowledge. Therefore, the paper can be seen, indeed, as opening new directions.\n",
" We thank all reviewers for providing us with their useful feedbacks for our contribution. We will update the paper accordingly. Please find our detailed answers below.",
" ### Weaknesses\n\n_About incremental contribution and dual black box strategy._\n\nApart from the “Real Time Image Saliency for Black Box Classifiers” paper by Dabkowski et al., over which we show significant advantages, we are not aware of any other methods using a neural network to explain another black-box network. The assessment that none of the loss terms used are entirely new is true. However, we have extended several previous ideas for the terms and have efficiently combined them in a way that allows us to retrieve more precise attributions than earlier works. \n\n_About stronger evaluation and EP attributions._\n\nIt is true that EP can be equally sharp but EP (where the mask is specifically optimized for each **test** image) is also many magnitudes slower. Methods like EP were also the reason to not use the full MS COCO dataset for some experiments, as it already took more than 1 day to attribute 1000 images with EP (see Table 3 in the Supplementary Material).\n\n_Other comments:_\n\nWe thank the reviewer for the additional suggestions. Regarding Figure 2, it was an oversight, the different symbols (F, m, S, E) will be added to the Figure to make it more clear.\n\n### Questions\n\n_About multi-label training task._ \n\nWe disagree with the reviewer, as we show several multi-label examples – see Figures 1 and 3 (Images 6, 9, 10, and 13) as well as many examples in the Supplementary Materials – and have performed numerical evaluations on the full datasets containing both single- and multi-label examples (Table 2). Note that we refer to multi-label as the process of training a model on multiple labels per image (as opposed to one label per image), so that the Explainer can directly attribute multiple object classes for a single image at inference time. \n\n\n_About attribution and segmentation connections._\n * _About the goal of the paper._ \nThe paper tries to solve the problem of generating attributions for a black-box classifier. The experiments on segmentations are only done as a means of evaluating against other methods, which we deem a reasonable assessment, despite the shortcomings of this evaluation. We discuss these limitations in lines 298-303.\n\n * _About assessment metrics and computational time._ \nEvaluating the quality of attribution methods has always been a very difficult task, which is why several other papers have proposed a wide range of evaluation metrics. We have tried to include several of these metrics to allow the reader to have a broader view on the overall quality of the different methods as there is no single standardized assessment for this task. \nRegarding the tests which could not have been run in full, we reiterate that this has been due to the slow execution speed of other methods, as it can be seen in Table 3, Section A of the Supplementary Material. While inference using VGG-16 on 1000 COCO-2014 test images takes 34s for our method, EP needs close to 34h and iGOS++ needs 26h40. Other datasets and model combinations show similar orders of magnitude of computational time. 
Instead of completely omitting these other baselines, we have decided that it is better to include them and to evaluate everything on a reduced dataset, to ensure a fair comparison.\n\n * _About the use of heat maps._ \nThe heat maps are the final output by attribution methods and are intended to serve as visual explanations for human users.\n\n_About the dependence of Explainer and Explanandum._ \n\nThe classifier’s predictions are used as a backpropagated learning signal for the Explainer, making the Explainer learn from the classifier directly.\n",
" _About explaining models using their trained weights._ \n\nIf we understand the reviewer correctly, the reviewer is suggesting that it is desirable to be dependent on the parameters of the model. We agree on this point. However, we argue that not accessing weights directly but by learning a model interpreting the outputs – which are dependent on the weights – is a more robust way to provide an explanation of a given pretrained classifier (the Explanandum).\n\n_About faithfulness._ \n\nWe design our method to provide an explanation for the output of a classifier rather than trying to explain the classifier internals itself (weights, layer order, etc). We observed that all our trained explainers are faithful in practice, as we show in Section C (containing Figures 4 and 5) in the Supplementary Material. There, we show that our explainer is actually very faithful to the classifier, as the average mask activation correlates really strongly with the probabilities by the classifier, even in cases where the classifier makes wrong predictions (e.g. rows 2,3, and 5 in Figure 4 or row 7 in Figure 5). \n\nBellow we provide detailed responses for a few questions raised by the reviewer.\n\n * _The third advantage (line 53) is confusing_ \nOur third point needs reformulation: At inference time, our Explainer directly generates attributions for any class without needing an explicit signal from the Explanandum. However, we are aware that in practice one may want to retrieve the classification probabilities from the Explanandum as well (which is of course possible and would just need two forward passes).\n\n * _Ground truth labels are used at training time._ \nWe use the ground truth labels and we acknowledge the concern about faithfulness from the reviewer as we have not sufficiently explained our incentives for doing it this way. The choice was between taking the same ground truth labels that the classifier was given during training on unobstructed images or taking the predicted labels and we chose the first option. We believe that the first approach is better at handling attributions for **false negative** predictions of a classifier since we still optimize the attributions for classes which aren’t predicted with high probability on the unmasked images. In turn, the second approach might be more suited if we expect the classifier to still make a significant amount of **false positive** predictions on the training set (which was **not** the case for the architectures/datasets we decided to explain). However, we argue that even the potential negative effect of **false positive** predictions is to some extent mitigated in our approach since we use the classification prediction of the classifier, which is not corrected with the ground truth labels. Finally, note that using the classifier predictions would require a thresholding operation, which would be an additional complication. In the end, we are aware that the Explainer requires an Explanandum which performs very well on the training set to minimize the effect of misleading training signals.\n\n * _Lines 475-482 (supplementary material) suggesting unfaithfulness._ \nThese lines are indeed confusing and should be reformulated. We strongly believe that our explainer is faithful to the classifier under the assumptions that we made above. 
It is true however, that we cannot rule out 100% of misleading attributions but those cases should be very rare.\n\n * _The reviewer noticed almost no highlights for regions other than the top predicted class._ \nWe disagree with the assessment that only the top predicted classes are highlighted. Even for probabilities of less than 10% given classifiers, the Explainer shows visible (and sometimes even very clearly localized) attributions (see rows 3, 6, 7, and 8 in Figure 4 or row 1 in Figure 5). For objects that are predicted with even lower probabilities, the learned attributions will not show significant enough values to be clearly visualized by the heatmap. Of course one could rescale the color map to also pick up these less pronounced signals from the mask, however we have chosen to use the standard mapping that other methods like Grad-CAM have used. \n\n\n_About the citation Adebayo et al [1] in line 65._ \n\nThis is indeed the wrong citation and will be corrected in the potential camera-ready version. \n",
" _About the comparison with Grad-CAM._\n\nWe agree that Grad-CAM is simpler. However, our approach is more accurate. It shows clearer boundaries (Figure 3) and better numbers (Tables 1 and 2). We acknowledge that in certain applications, a user might prefer Grad-CAM over our approach for its simplicity, but that in other applications having an accurate attribution is far more important than having a simpler method (e.g. in the medical domain). Furthermore, even though the Explainer requires a training process, at evaluation time it is equally fast as Grad-CAM.\n\n_About human validation through user study._\n\nIt is a fair observation. We did not consider it because we are not aware of any competing method doing this.\n\n_About one black box explaining another._\n\nYes, we have shown that we can unravel a black box using another. This is a different approach and a novelty of our contribution. We do not consider it as a weakness. ",
" ### Questions\n\n_About the training process of the Explanandum._\n\nThis might be a misunderstanding. We have not retrained the classifier with Explainer-masked data as well. This is only an idea for future work (see lines 326-328 in the paper). For this paper, the classifiers have only been (pre-)trained on the unmodified (i.e. unmasked) images from the VOC-2007 and COCO-2014 datasets.\n\n### Limitations\n\n_About tiny objects and occlusions._\n\nThe reviewer is correct that there might be a limitation for boundary-precise attribution of tiny objects as we push the mask to have a certain size between given limits. However, we see that in practice, the model is doing a very good job attributing tiny objects that co-occur with larger ones. A great example is given in Figure 7 in the Supplementary Material. In Image 8, the Explainer is able to precisely attribute a tennis ball, which only makes up a very tiny fraction of the entire image.\n\nRegarding the Explainer’s behavior with partially occluded objects, there aren’t any good examples for that in the main part of the paper since our visualizations are done with aggregated masks. However, a clear example can be found in the first row of Figure 5 in the Supplementary Material, where the attribution for the ‘car’ class only highlights car parts and does not include the people that are occluding parts of the car. In a revised version, we agree that more details on this interesting aspect should be included.\n",
" This manuscript focuses on to provide an elegant approach to visually attributing the classification result of a frozen, pre-trained,\nblack-box classifier called as Explanandum on the input image(s). The author provides a solid method for the second deep network (called \"Explainer\") to provide a robust mask and attributions for test images. Together with solid experiments and solid baselines, the author achieves competitive results for attributions on the well-known datasets of PASCAL VOC and MS-COCO. \n\n Clarity: \n++ The paper reads very well and provides a very good description of related work and background, motivating the problem. Even outside of the contribution of this paper, I would recommend this paper to people getting started with deep learning/understanding black-box classifiers as it provides a thorough description of the part of the pipelines it deals with.\n\nNovelty: \n++ The proposed approach formulation is concise, convincing, and novel. A seemingly reasonable approach has been conducted in this manuscript. Compared to the prior work on performing attributions for deep learning image classifiers, the current strategy involves supervised meta-learning to produce dense class-specific attribution masks, which balances the preservation and deletion of information from the input image(s) \n++ The idea of incorporating the Explainer to generate masks and retraining the Explanandum with the original as well as Explainer-masked data can ensure that classification accuracy can be high and free of influences from non-object regions. A pretty decent methodology has been applied to this attribution problem by the authors on this.\n\nExperiments: \n++ There are a number of experiments performed across datasets that are extensive, fair, and provide solid foundations. The fact that the proposed methodology achieves competitive results makes me confident in the result as the implementation and experiments are sufficient and presented clearly. Additionally, the improvements are fairly consistent. Besides, in-depth analyses have been provided on the approach for different tasks and enough information has been provided via a thorough analysis of where the benefits of the approach have been obtained.\n\nReproducibility: \n++ Thanks to the author's efforts in giving enough clarity about the given methodology, this can be replicated with some efforts. I believe this could be a significant contribution this time in NeurIPS. There is a small concern I faced in this manuscript about training the explainer, but little-to-less information has been provided on the black-box classifier (Explanandum), as we could see the explainer is trained to generate the masks, the retaining of the Explanandum with original + explainer-masked data information is missing in this section. Could I please request the authors to give some views on this as I tried to identify this root cause but found no such information on this? My assumption is that the author neglected on the part about tiny objects/having occlusions etc. I believe this methodology works great in general but general scenarios could also have tiny object and objects with occlusions, can the author please provides some views on this. Even though this approach sounds significant and I'm more inclining towards accepting this paper, as I found out this could potentially be a significant contribution to the computer vision community. ",
" The authors propose to train a deep network - the explainer to estimate attributions for a pre-trained black-box classifier. The attributions are defined as image masks, in particular two masks - the class-relevant mask and the rest. Local and smoothness constraints assist the explainer in generating crisp, highlight localized attribution masks. The authors evaluate the approach on multiple datasets and compare with standard explanation methods. \n Strengths\n\nThe ability to automatically generate reasonably accurate segmentation masks (albeit in an opaque manner) is a key strength of the approach. \n\nThe authors have experimented with multiple datasets and architectures to validate the method. \n\nThe paper is well-written and structured. It is easy to follow.\n\nWeakness\n\nIt is interesting to note that Grad-CAM results in comparable segmentation masks without the need to train an additional mask generator (Table 1). Grad-CAM extrapolates the low dimensional mask to the original image size that adds some noisy artifacts. Unlike the proposed explainer, it does not enforce any locality/smooth constraints. Despite these artifacts, Grad-CAM a very simple approach in contrast to the proposed explainer, can generate good segmentation masks. Thus, the proposed method’s advantage is unclear.\n\nWhile the authors provide a quantitative comparison of existing approaches through the segmentation dataset, as the aim is to provide a human interpretable explanation, a user study to validate the same will strengthen the contribution.\n\nFinally, the approach reads as a blackbox (mask generator) trying to explain another blackbox (classifier).\n\n Lines 29 and 30 - The authors portray the dependence of the explainability methods on trained model architectures and weights as undesirable. I however, would argue that this is desirable to enhance the faithfulness of the explanations tied to the trained model. The authors have to either elaborate more on their claim or provide references supporting their claim. The subsequent sentence (line 31) is more appropriate. There is evidence suggesting that the explanations generated through certain methods appear to be independent of model parameters (Adebayo et al., and Sixt et al).\n\nThe third advantage (line 53) is confusing. If explanandum is not required, then how does one ensure the faithfulness of the explanations to the explanandum? Perhaps the authors refer to a model-agnostic explainer?\n\nLine 65- Perhaps the citation is incorrect. Adebayo et al [1] do not handle issues arising from high sensitivity to noise and clipping effects of gradients. \n\nLine 114 - target label(s) for the input image - are these the predicted labels of the explanandum or the ground truth labels? If it is ground truth labels, then the explainer can never be faithful to the explanandum - what is the explainer then explaining? The authors should clarify if their method is post-hoc or explainable by design?\n\nI reiterate the initial concern on faithfulness based on the observations in lines 475-482 (supplementary material). The explainer does not seem to be faithful to the explanandum!\n\nFurthermore, I notice almost no highlights for regions other than the top predicted class (Figures 4, 5 supplementary material). How does the model explain non-zero probabilities assigned to the other classes?\n not applicable.",
" The paper proposes a method to identify regions of the images that identify and contribute decisively at identifying image by the decisive object. The task is approached by using two network entitled Explanandum (i.e. the model to be explained) and Explainer (the model that explains) and training process based on 4 losses. The method is evaluated in conjunction with Pascal VOC 2007 an MS-COCO 2014 datasets Strengths:\n - the idea of using another network to infer the internal mechanism of a base one is very appealing\n\nWeaknesses:\n - on the technical side, I view this paper contribution more closely to \"incremental\" than \"outstanding\". The idea to use two networks in a differentiating analysis is known. None of the loss terms used in optimization are new; the approach is related to image saliency and object segmentation\n- given the limited contribution on the idea side, the evaluation should have been stronger. PASCAL - VOC 2007 is the smallest version, MS COCO is larger, but only parts of it were used. On the visual comparison (fig 3- main paper), the proposed method is the sharpest, but EP, with another threshold on the heat map gives very similar results. \n\nOverall, the paper is not so strong on either side to be viewed as a clear accept. \n\nOther small issues:\n - If accepted, the evaluation section should be re-written to enhance the strengths (positive part). Also, the explanation of experiments details (database presentation, .. l196-205) needs to be separated from the actual experiments \n - figure 2 - is rather poor: Who is \\mathcal{F} in the figure? \\mathbf{m}? \\mathbf{S}? 1. To me the details of the problem to be solved are not firmly established. The paper says that it uses the \"multi-label classification tasks\" version of the two databases. Yet, only single class examples are shown in the evaluation. Than it speaks about \"attribution\" and at last about segmentation. I am missing some phrases (in different parts of the text) that: \n - say simply what the paper tries to solve\n - how can one evaluate the quality of this solution and why. The evaluation section carries the reader between different metrics and (unfortunately) excuses why some tests couldn't have been run in full. \n- how can one use the heat masks found\n\n\n2. The method is not explained well:\n- l.104 \"given pre-trained classifier\" \n- l 108 \"Explainer sees an image and outputs a set of masks S\". \n Then how the Explainer relates to the pre-trained classifier!? It seems to me that the two are independent. The first time \\mathcal{F} appears is at line 133, again it is not explained well The paper (approached problem and solution) does not raise any ethical or societal impact. Therefore, there is no (need for) discussion from this point of view.\n\nThe paper discusses technical limitations."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"dwezajD8q7c",
"h7XO2jPIp0X",
"nips_2022_I-ggHgon-Az",
"h7XO2jPIp0X",
"lC5gFzWPAJy",
"lC5gFzWPAJy",
"3IMhr9PBwa7",
"nips_2022_I-ggHgon-Az",
"nips_2022_I-ggHgon-Az",
"nips_2022_I-ggHgon-Az"
] |
nips_2022_p9_Z4m2Vyvr | Amortized Mixing Coupling Processes for Clustering | Considering the ever-increasing scale of data, which may contain tens of thousands of data points or complicated latent structures, the issue of scalability and algorithmic efficiency becomes of vital importance for clustering. In this paper, we propose cluster-wise amortized mixing coupling processes (AMCP), which is able to achieve efficient amortized clustering in a well-defined non-parametric Bayesian posterior. Specifically, AMCP learns clusters sequentially with the aid of the proposed intra-cluster mixing (IntraCM) and inter-cluster coupling (InterCC) strategies, which investigate the relationship between data points and reference distribution in a linear optimal transport mixing view, and coupling the unassigned set and assigned set to generate new cluster. IntraCM and InterCC avoid pairwise calculation of distances between clusters and reduce the computational complexity from quadratic to linear in the current number of clusters. Furthermore, cluster-wise sequential process is able to improve the quick adaptation ability for the next cluster generation. In this case, AMCP simultaneously learns what makes a cluster, how to group data points into clusters, and how to adaptively control the number of clusters. To illustrate the superiority of the proposed method, we perform experiments on both synthetic data and real-world data in terms of clustering performance and computational efficiency. The source code is available at https://github.com/HuafengHK/AMCP. | Accept | In this paper, the authors propose a novel amortized clustering method in which intra-cluster mixing and inter-cluster coupling are introduced. The optimal transport is used to learn the relationship between samples and reference distribution with intra-cluster mixing. The inter-cluster coupling assigns samples to clusters and generates new clusters. The proposed method is novel in the sense that optimal transport is first introduced to amortized clustering, and its effectiveness is shown through synthetic and real datasets.
Many readers would be interested in this novel approach. | train | [
"HRDKNcHVjfB",
"K2l6rKehry",
"i1P1BD9Rfkf",
"D8HLg-Z8fX",
"nSC6r_5ZmqJ",
"rfBJT-GtWE-z",
"4A8a34DaAy",
"_nLxQ5rwhB0",
"h7VoeCOZSh7",
"_zZnrz7wEas",
"YXw6FaDc6EJ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your positive feedback. Our paper won't be better without your nice suggestions. Thanks again.",
" Thank you very much for increasing score! Our paper won't be better without your valuable suggestions. Thanks again.",
" I would like to appreciate the prompt and rewarding responses given by the authors.",
" Thank you for your thorough response, and for updating the paper with various clarifications. I have now updated my score to recommend acceptance. ",
" Thanks very much for your valuable and comprehensive comments!\n\n**Q1. The explanation of Figure 2 seems too concise. Difference from the existing work.**\n\nThanks for your suggestion, we add more detailed explanation of Figure 2 (Figure 1 (b) and (c) in the revised version). The proposed cluster-wise AMCP is compose of two key strategies: Intra-cluster mixing (IntraCM) and inter-cluster coupling (InterCC). IntraCM and InterCC performs to generate cluster one by one. Among them, IntraCM cooperates optimal transport from mixture view and describe the cluster's summary statistics with the aid of multiple EM steps. Then, InterCC regards the output of IntraCM as data coupling to generate the next cluster.\n\nFor the differences between our method and related work, we have reorganized the related works, and clear how the proposed method differs from the existing work in the revised version. Specifically, among existing amortized clustering models, ST-ACT inherits the feature of GMM which needs to manually set the number of clusters. DAC generalizes ST-ACT and enables to produce a varying number of clusters. Both ST-ACT and DAC can be regarded as simply amortized inference of Gaussian mixture with the aid of neural networks. The attention mechanism used in ST-ACT and DAC are much complexed than the proposed optimal transport mixing reducing the quadratic cost of traditional similarity weight computation. The most similar work to our methods is NCP and CHiGac. Both of them explain clustering from the perspective of generative models, which similar in spirit to the popular Gibbs sampling algorithm for Dirichlet process mixture models, but without positing particular priors on partitions. Besides, NCP and CHiGac rely heavily on the selecting of anchor points and the processing ordering and often exhibits unstable properties. Different from previous work, AMCP connects the relationship of mixture model and optimal transport to describe the cluster's summary statistics well, and seamlessly combines mixing and coupling to achieve efficient amortized clustering.\n\nMore detailed analyses are also given in revised Supplementary Materials (B.4).\n\n**Q2. Whether the performance is sensitive to the choice of prior in generative process.**\n\nThe parameter of prior distribution indicates the number of classes in the dataset we want to generate. As we know, both MNIST and CIFAR-10 have 10 classes and Tiny-ImageNet have 200 classes. To formulate the amortized mechanism, for all dataset, data containing half of the classes is used to generate the training set, and the remaining half is used to generate the test set, with no overlap between training classes and test classes. To better understand the generative process of training and testing data, we reformulate the generative process given in the main manuscript, i.e., \n\n$\\alpha \\sim Exp(1)$, $\\ c_{1:N} \\sim CRP(\\alpha)$, $K-1 \\sim Binomial(K_g -1, 0.5)$, $l_{k} \\sim U(0,K-1)$, $x_i \\sim U[D, l_{c_i}]$. \n\nHere $x_i \\sim U[D, l_{c_i}]$ indicates sampling $x_i$ with label $l_{c_i}$ uniformly from dataset $D$, which can be training set or testing set. $K_g$ indicates the number of clusters we can sample, $K$ is sampled from Binomial distribution with parameter $K_g$ and 0.5. In this case, the parameter $K$ indicates the number of classes we are able to sample. 
More detailed data set information can be found in Supplementary Material (C.1)\n\nIn this case, we conducted experiments by sampling training datasets with different $K$ (the number of classes) and evaluate the model performance. We compare different amortized clustering models under different training sets (contains 1-5 classes respectively) on MNIST dataset. Note that we define the test set contains all test classes (the remaining 5 classes). As shown in the following table, we list accuracy comparison of different methods under different $K$. We can see that the accuracy of all methods increases with the number of classes, because more classes in the training data help the test data to scale quickly. AMCP can achieve optimal results in all training scenarios, which proves the effectiveness of AMCP. More comprehensive experimental results and analysis on all datasets are given in Supplementary Material (C.2.5).\n\n|Methods |\t1|\t2|\t3|\t4|\t5|\n| ---- | :----: | ---- | ---- | ---- |---- |\n|DAC\t|0.2341|\t0.4425|\t0.7125|\t0.8543|\t0.9796|\n|NCP|\t0.2015|\t0.4236|\t0.7052\t|0.8325|\t0.9633|\n|ST-ACT\t|0.1989|\t0.4201\t|0.7011\t|0.8235|\t0.9596|\n|AMCP|\t0.2488\t|0.4569|\t0.7226\t|0.8721\t|0.9845|\n\nFinally, thanks again for your valuable comments.\n",
" Thanks very much for your valuable and comprehensive comments!\n\n**Q1. For easier readability, it might make sense to merge Figures 1 and 2.**\n\nWe have merged Figure 1 and 2 in the revised version.\n\n**Q2. Difference to existing amortized clustering**\n\nAmong existing amortized clustering models, ST-ACT inherits the feature of GMM which needs to manually set the number of clusters. DAC generalizes ST-ACT and enables to produce a varying number of clusters. Both ST-ACT and DAC can be regarded as simply amortized inference of Gaussian mixture with the aid of neural networks. The attention mechanism used in ST-ACT and DAC are much complexed than the proposed optimal transport mixing reducing the quadratic cost of traditional similarity weight computation. The most similar work to our methods is NCP and CHiGac. Both of them explain clustering from the perspective of generative models, which similar in spirit to the popular Gibbs sampling algorithm for Dirichlet process mixture models, but without positing particular priors on partitions. Besides, NCP and CHiGac rely heavily on the selecting of anchor points and the processing ordering and often exhibits unstable properties. Different from previous work, AMCP connects the relationship of mixture model and optimal transport to describe the cluster's summary statistics well, and seamlessly combines mixing and coupling to achieve efficient amortized clustering.\n\nWe have reorganized the related work, and clear how the proposed method differs from the existing work in the revised version. More detailed analyses are also given in revised Supplementary Materials (section B.4).\n\n**Q3. The significance of this type of work will remain relatively limited.**\n\nI agree with you that this type of work has limited results right now. I think the work of amortized clustering makes sense, especially when we are faced with classes of data that we have never seen before. That's why we focus on this type of work. In our work, our experiments on synthetic and real-world datasets (MNIST, Tiny-ImageNet, and CIFAR-10) hope to verify the effectiveness of the model (the ability of generalization to unseen classes).\n\n\n**Q4. Could you please clarify what is this \"training set\" refers to in section 4.1?**\n\nAt each training step, we generate 10 random datasets according to the generative process. Each dataset contains 200 points on a 2D plane, each sampled from one of 4 Gaussians. Thank you for pointing out our lack of detail, we have described it in more detail in the revised version.\n\n**Q5. The results in Tables 1 and 2.**\n\nIn Table 1 and 2, the average on 5 runs are reported. For MNIST toy-clustering results in Table 3 only one random selected testing result for each method is visualized. We have cleared this point in the revised version.\n\n**Q6. It could be interesting to see whether this is the case throughout training.**\n\nThanks for your suggestion, please forgive us for not being able to show pictures in rebuttal’s reply box, and we give the training and test performance vs. epoch for NCP and our AMCP on synthetic data and CIFAR-10 dataset in the Supplementary Material (C.2.3). AMCP can achieve better performance than NCP, and AMCP converges faster than NCP.\n\n**Q7. 
It could be more insightful to see a qualitative toy example.**\n\nTo illustrate how the clustering models capture the shape ambiguity of some of the images, we visualize the ground-truth image clusters and clustering results (obtained by DAC, NCP, ST-ACT and AMCP respectively) on MNIST in the main manuscript, as shown in Table 3. In original Supplementary Material (C.2.1), we give full experimental results of other two datasets (Tiny-ImageNet and CIFAR-10). In MNIST, DAC assigns the digit 7 (with similar appearance to 9) to cluster 9, and NCP generates a new cluster for it. Fortunately, ST-ACT and DAC correctly assign it to cluster 7. All baselines are unable to assign the digit 4 written in a strange way to the right cluster, and our AMCP correctly assigns it to cluster 4. Similar results can be found in other two datasets.\n\nTo further investigate the model efficiency, we compare AMCP with NCP by the histograms of log-likelihood. The histograms of the log-likelihood per test example are presented in Supplementary Material (C.2.4). We notice that all histograms characterize a heavy-tail indicating existence of examples that are hard to represent. However, taking a closer look at the histograms for AMCP reveals that there are less hard examples comparing to the NCP, which indicates the proposed optimal transport mixing is beneficial for exploiting pair-wise interaction and further improving clustering performance.\n\nFinally, thanks again for your valuable comments.\n",
" Thanks very much for your valuable and comprehensive reviews!\n\n**Q1. The contributions of this paper.**\n\nHere we give the summarized contributions of our work as follow:\n- *Efficiency*. The proposed AMCP not only inherent the efficient cluster-wise learning manner, but also reduce the quadratic cost of traditional similarity weight computation with the aid of optimal transport mixing. \n- *Novelty*. To the best of our knowledge, AMCP is the first to introduce optimal transport for amortized clustering and effectively correlate the relationship between optimal transport and mixture models.\n- *Adaptivity*. Non-parametric learning manners exist not only in model parameter learning but also in cluster generation, which allows unsupervised parameter learning and non-handcrafted intervention. \n- *Flexibility*. Different from existing amortized clustering methods without changing the already-generated clusters, AMCP is able to dynamically adjust the previous generated clusters to avoid the accumulated errors with the aid of mixing and coupling manners.\n- *Effectiveness*. Extensive experiments conducted on both synthetic and real-world datasets demonstrate that AMCP can cluster data effectively and efficiently. \n\nWe have added the above summarized contributions to Introduction part in the revised version.\n\n**Q2. How is it different from existing amortized methods?**\n\nAmong existing amortized clustering models, ST-ACT inherits the feature of GMM which needs to manually set the number of clusters. DAC generalizes ST-ACT and enables to produce a varying number of clusters. The attention mechanism used in ST-ACT and DAC are much complex than the proposed optimal transport mixing reducing the quadratic cost of traditional similarity weight computation. The most similar work to our method is NCP and CHiGac. Both of them explain clustering from the perspective of generative model, which relies heavily on the selecting of anchor points and the processing ordering and often exhibits unstable properties. Different from previous work, AMCP connects the relationship of mixture model and optimal transport to describe the cluster's summary statistics well, and seamlessly combines mixing and coupling to achieve efficient amortized clustering.\n\nWe have reorganized the related work, and clear how the proposed method differs from the existing work in the revised version. More detailed analyses are also given in revised Supplementary Materials (B.4).\n\n**Q3. Lack of introduction of optimal transport (OT). What is the assumption of the proposed clustering method?**\n\nActually, we have given a detailed introduction of OT, entropic regularized Kantorovich relaxation, and relations to a clustering problem in original Supplementary Material (A and B). In the revised version, we have slightly reorganized the section 3.2.1 and give more introduction of OT and it’s connection to our method.\n\nIn AMCP, the introduced IntraCM sufficiently exploit pairwise interactions between data points in intra-cluster level and benefit for the next inter-cluster coupling, which is able to sufficiently mine the hidden structure among data. Unlike the quadratic cost of the self attention mechanism used in ST-ACT and DAC, IntraCM uses a fixed number of references serving as queries, which is computational efficient, scalable to large dataset. We have cleared this point in section 3.2.1.\n\n**Q4. 
The converge guarantee of objective function.**\n\nBy repeating the cluster generative process until there are no data points left, IntraCM and InterCC are processed alternatively, and the clusters are iteratively refined by deriving the explicit supervisory signal from the already formed clusters. This type of learning procedure is similar to the two-stage unsupervised clustering method, e.g., DeepCluster [1], which adopts pseudo labels from clusters. Although our AMCP does not contain a pseudo-label supervised learning process, IntraCM is performed on both already formed clusters and remaining unassigned data points, which can be regarded as learning process for pseudo labels. In this case, we can say AMCP is able to converge. We have cleared this point in the revised version. Furthermore, we provide the training and test performance vs. epoch for AMCP on synthetic data and CIFAR-10 in the Supplementary Material (C.2.3), which empirically proves the model convergence.\n\n[1]Caron M, et al. Deep clustering for unsupervised learning of visual features, ECCV. 2018\n\n**Q5. In Table 1, does the time mean training time?**\n\nThe time listed in both Table 1 and 2 is average testing time on 5 runs since we want to show how quickly the model scales to the unseen test classes. We also provide training times in Supplementary Material (C.2.5).\n\n**Q6. The limitation of this work.**\n\nLimitations of this work are presented in the conclusion, such as the inapplicability to hierarchical data and the lack of interpretability and controllability.\n\n\nFinally, thanks again for your valuable comments.\n",
" We would like to thank the reviewers for their helpful feedback and insightful comments and AC and SAC for their efforts in the review work. We answered the questions raised by each reviewer individually. The revised paper and corresponding supplementary material were also submitted. The modified parts are marked in blue font. Thanks again.",
" In this paper, the authors propose a method called amortized mixing coupling processes for clustering (AMCP). ACMP learns the relationship between samples and reference distribution with intra-cluster mixing (IntraCM); while assigning samples to clusters and generating new clusters with inter-cluster coupling (InterCC). The ACMP method is tested on synthetic and real-world datasets in the experiments. Strengths:\n\nThe authors propose a new clustering method called AMCP.\n\nThe method outperforms its competitors in terms of clustering performance and computational efficiency.\n\nWeaknesses:\n\nThe contributions of the paper are not clear. The authors do not introduce the related work in detail, and it is not clear how the proposed method differs from the existing methods. How is the InterCC related other amortized clustering methods? Is the paper the first one that makes use of optimal transport for solving a clustering problem? \n\nThe authors might need to briefly introduce an optimal transport plan, entropic regularized Kantorovich relaxation, and how these concepts are related to a clustering problem. The assumption behind the proposed clustering method is also not apparent. Why is the relationship between the samples and reference distribution a proper way to define clusters?\n\nI am not sure whether the proposed optimization procedure converges. It looks like in Equation (11), $b_{i,j}^{(k)}$ and $C_k$ are dependent. Therefore, dividing the optimization process into IntraCM and InterCC might not guarantee the convergence of the algorithm. \n 1. What are the contributions of this paper?\n2. What is the assumption of the proposed clustering method? How is it different from existing amortized methods?\n3. Is the objective function defined in Equation (11) guaranteed to converge?\n4. In Table 1, does the time mean training time?\n The authors did not address the limitations and potential negative social impact of their work.\n",
" The paper proposes an amortised clustering process model, where the clustering prior is learnt in a meta-learning setting. \n\nInference is performed cluster-wise sequentially, involving inter- and intra-cluster similarities. To avoid calculations involving distances between all pairs of clusters, the authors use a linear optimal transport strategy. \n\nThe method is demonstrated on synthetic gaussian mixture models as well as meta-learnt clustering in MNIST, Tiny-ImageNet and CIFAR-10. The paper is well-written and quite clear, though some parts (sections 3.2.1 and 3.2.2) are quite dense with equations. Minor suggestion: For easier readability, it might make sense to merge Figures 1 and 2. \n\nThe idea to make use of optimal transport within the inference scheme is interesting and novel, and the authors demonstrate that it's beneficial over the standard EM algorithm. I also liked how the connection between their OT-based approach and ISAB was outlined in the appendix. \n\nThe authors emphasise that their approach is sequential per cluster and not per data point. While true, it seems to me that a clusterwise approach has been previously presented in [Pakman et al, 2020]?\n\nI believe the significance of this type of work will remain relatively limited (In fact, I haven't seen _any_ of the existing work in amortised clustering processes being used for any non-toy tasks yet. On the other hand, one could argue that developing better models/inference algorithms can have the potential to change it -- in that case, ideally, it would be desirable to see convincing qualitative improvements over previous results, or a non-toy application example). \n\nI have some questions about the experiments, I have outlined them below. * In section 4.1, I am confused by the statement \"In this case, the training set contains 200 samples belonging to 4 clusters\". I thought that amortised clustering methods would be trained on multiple datasets by repeated sampling from the CRP prior, with a variable number of clusters. Could you please clarify what is this \"training set\" refers to?\n\n* Are results in Tables 1 and 2 based on a single test set case, or averaged over multiple (potentially varying-sized) test sets? For example, for MNIST visualisation, a subset is used, but what about the results in Table 2? \n\nMinor: \n\n* The results show that the proposed method outperforms other amortised methods at test time. It could be interesting to see whether this is the case throughout training (e.g. how the training/test curves look like as a function of iterations)?\n\n* Right now, experiments are only presented as summary statistics. While this is common, it could be more insightful to see a qualitative toy example, e.g. some visualisation where the proposed OT-based methodology behaves differently to an existing model (say NCP). No concerns",
" This paper proposes a novel amortized clustering framework coupled with two key strategies named Intra-Cluster Mixing and Inter-Cluster Coupling. Intra-Cluster Mixing utilizes optimal transport to learn the plan for distributing the mass of data to differential reference vectors. Inter-Cluster then resorts to this plan to form the next cluster and update existing clusters. According to the results on both synthetic and real-world datasets, the proposed method enjoys superior performance and computational efficiency. Strengths:\nThe proposed method is novel and practical. In contrast to previous point-wise amortized clustering methods, the proposed framework directly models the cluster generative process. Introducing optimal transport in the E step to optimize the ELBO of mixing likelihood is also a highlight. Besides, this paper is well-organized and easy to follow. The proposed method is technically sound.\n\nWeaknesses: The explanation of Figure 2 seems too concise. It would be better if more details are provided to make the proposed learning procedure more understandable. Besides, the difference between the proposed method and the literature [23] is not clear. The authors are encouraged to provide more details on the differences. \n The settings of prior distribution are directly given in experiments, e.g., $l_{1,K}\\\\sim U(0,9)$ in line 302. I wonder whether the performance is sensitive to the choice of prior. Limitations of this work are presented in the conclusion. This work is a technical clustering approach and has few social impacts."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"i1P1BD9Rfkf",
"D8HLg-Z8fX",
"nSC6r_5ZmqJ",
"rfBJT-GtWE-z",
"YXw6FaDc6EJ",
"_zZnrz7wEas",
"h7VoeCOZSh7",
"nips_2022_p9_Z4m2Vyvr",
"nips_2022_p9_Z4m2Vyvr",
"nips_2022_p9_Z4m2Vyvr",
"nips_2022_p9_Z4m2Vyvr"
] |
nips_2022_INzRLBAA4JX | Revisiting Sparse Convolutional Model for Visual Recognition | Despite strong empirical performance for image classification, deep neural networks are often regarded as ``black boxes'' and they are difficult to interpret. On the other hand, sparse convolutional models, which assume that a signal can be expressed by a linear combination of a few elements from a convolutional dictionary, are powerful tools for analyzing natural images with good theoretical interpretability and biological plausibility. However, such principled models have not demonstrated competitive performance when compared with empirically designed deep networks. This paper revisits the sparse convolutional modeling for image classification and bridges the gap between good empirical performance (of deep learning) and good interpretability (of sparse convolutional models). Our method uses differentiable optimization layers that are defined from convolutional sparse coding as drop-in replacements of standard convolutional layers in conventional deep neural networks. We show that such models have equally strong empirical performance on CIFAR-10, CIFAR-100 and ImageNet datasets when compared to conventional neural networks. By leveraging stable recovery property of sparse modeling, we further show that such models can be much more robust to input corruptions as well as adversarial perturbations in testing through a simple proper trade-off between sparse regularization and data reconstruction terms. | Accept | In this paper, the authors introduce a convolutional sparse coding layer, which is intended as a replacement for a convolutional layer and has greater interpretability and stability. Experiments show that a ResNet modified with this CSC-layer can achieve comparable performance on standard datasets as convolutional networks and is more robust to noise. The strength of this paper is that the novel layer it proposes is faster than previous sparse coding networks and that it has comparable accuracy and speed to ResNets while being more robust. A weakness of the paper is that the claims of improved interpretability do not transfer to the network as a whole. The strengths of the paper outweigh the weaknesses, and the authors should clarify in the camera ready that interpretability is only intended layer-wise. | train | [
"90W88iuDif",
"ZJqNiWIp0q0",
"vapRlkFEwmp",
"I1RilkEObZ",
"XxDQii6l0Ju",
"mY3EYbkobI0",
"WeMOMstw1TUf",
"fy5BdC95bwe",
"96rTFKvFlxz",
"7_7MvUX6HN",
"e_vbTgs8Jp1",
"7CxjdvC29S",
"JlPDGbyfO0j"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer EVjq,\n\nFor Q1, it might be caused by the number of FISTA iterations. We will conduct more ablation studies and visualization in the future version. \nAlso, thanks for the suggestion about the visualization, we will add a border between different filters to make better visualization.",
" Thanks to the authors for the clarification. I have no future questions. This is a great exploration to improve the robustness of the neural network to input perturbations by sparse coding. I agree with the reviewer EVjq that visualizing the learned sparse dictionary convolutional kernels will make it more insightful. Please include the discussions and the visualization in the final version. ",
" We thank the reviewer for acknowledging our work. To answer your question, we test the time cost on the CIFAR10-C dataset with 10000 samples using 1 GPU. As declared in our previous reply, we need to perform #types_of_noise $\\times$ #noise_levels forward processes for the training data (or a subset of the training data) to find the relationship of optimal $\\lambda$ as a function of $r$ (i.e., the function in Alg. 1, line 12). It takes about 50-100 forward processes (depending on the datasets), each of which costs about 5s. So the total time cost would be a few minutes. Once this stage finishes, the following prediction of the labels only takes 2 more forward processes, which is negligible.",
" I thank the authors for the detailed response and experiments! I have increased my score. It would be great if the authors could further give out the computing time of Algorithm 1. ",
" Dear reviewer nUHw: \n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.",
" Dear reviewer WxVJ:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.",
" I thank the authors for the detailed response and additional analysis and experiments! Please find below some follow-up comments.\n\nAdditionally, I’d also like to apologize that I had prepared a “Room for Improvement” section in my original review (please find below) but when transferring to the Open Review platform missed to include it. However, my questions reflected most of the concerns raised there so my omission doesn’t influence the current discussion.\n\n**Room for improvement**\n- I believe it would be helpful to visualize the learned sparse dictionary convolutional kernels which is common in other related work (e.g. [1]) and would provide insight into the signals that the dictionary can encode. \n- The classification experiments are missing confidence intervals. I believe the results would be enhanced if error bars are included. \n- I pose a few clarifications on the choice of hyperparameters and experimental setup that would enhance interpreting the results in the paper.\n- To promote reproducibility, it would be helpful for the authors to provide an open-source implementation of the proposed method.\n\n**Q1**: Visualizing the dictionary elements of the first ResNet convolutional layer 64 x 3 x 7 x 7.\n\nMost of the convolutional filters look noisy and unlike the edge, orientation, and center-surround detectors typically found in sparse dictionaries. I wonder whether this is due to the training not being fully converged or maybe the fact that only 2 iterations of FISTA are used. Also, it would be helpful to add a border between the different filters in the figure in order to separate them.\n\n**Q2**: Thank you for this additional result on how the number of FISTA iterations affects the model’s robustness to noise.\n\n**Q3**: Thank you for clarifying that only the bottom layer of ResNet is replaced by a CSC-layer.\n\n**Q4**: Thank you for specifying how the value of lambda was selected and for providing a histogram with the distribution of feature map values.\n\n**Q5**: Thank you for clarifying the magnitudes of levels 0-6 in Figure 2.\n\n**Q6**: Thank you for providing the additional baseline.\n\n**Q7**: Thank you for planning to release the implementation.\n",
" We thank the reviewer's comments but disagree with the suggested rating: our paper is both technically sound and experimental evaluation is rather thorough and convincing. We clarify some of your concerns or possible misunderstandings below, which hopefully can change your opinion about our paper:\n\n>Q1: Interpretability. The traditional convolutional layer performs a forward computation (the output is a linear combination of the inputs). In contrast, the convolutional sparse coding (CSC) layer performs a backward computation (the input is a linear combination of the outputs). It is not apparent why a backward computation is more interpretable than a forward one. In my opinion, it is not an individual layer that makes a neural network hard to interpret but the stack of these layers. While convolution layer and convolutional sparse coding are easy to interpret individually, using them in deep networks (with nonlinearities, normalization, etc.) is not.\n\nA: We note that we have never claimed that the CSC layer offers interpretability of the entire deep neural network. Rather, our claim is that the CSC layer itself offers interpretability, in the sense that it models the input as a sparse linear combination of a (learned) convolutional dictionary. Importantly, such an interpretation allows us to design a new technique for improving network robustness by leveraging the stable recovery properties of the sparse modeling, as well as a means of visualizing feature maps due to the fact that a CSC layer is (locally) generative and can naturally reproduce the input from its output. Notably, standard forward convolution layers do not provide such means of obtaining robustness and for feature visualization (hence interpretation). \n\nWe realized that our usage of “interpretability” in line 55 may have caused confusion for the reviewer. Hence, we have updated our writing in the revised version to further clarify.\n\n>Q2: Robustness. The explanation of why the CSC layer is more robust is insufficient --- no math derivation is used to explain this concept. From the writing, it seems the Lipschitz constant decreases if the regularizer increases. In this case, the comparison to standard convolutional networks is not fair. Maybe the authors would also like to enforce the Lipschitzness of traditional convolutional networks (e.g., following https://arxiv.org/abs/1804.04368).\n\nA: The fact that CSC is robust to input perturbation is well-established in previous work [42, Theorem 19] as we have discussed in Sec. 3.3. In the revised version, we have explicitly included a restatement of such results with rigorous mathematical characterization to more clearly explain the concept. \n\nRegarding Lipschitz constant: While we have never computed Lipschitz constant for our proposed SDNet, we agree with the reviewer that our method should have a smaller Lipschitz constant as it provides a stable recovery for the input. However, unlike commonly used techniques for improving Lipschitzness properties that usually improves robustness at the cost of a lower performance on clean data, our technique does not affect the performance on clean data at all.\n\n>Q3: Computational Complexities. It looks to me that the proposed layer is quite expensive. In the experiment, only one layer in ResNet is replaced by the proposed layer, and only two iterations are used in unrolling. And this already decreases the speed from 1000 to 900. 
I think a more comprehensive study on the relationship between accuracy, complexity, and iterations is needed when all layers are replaced.\n\nA: The following table shows the comparison of SDNet-18 and SDNet-18-All on accuracy, complexity. SDNet-18-All means all convolution layers are replaced with CSC-layer. And the number of FISTA iteration is two for all CSC-layers, hence the complexity is only twice. In the new supplementary material, we have also conducted ablation studies on the number of iterations on ImageNet, see Table D.1.\n\n| | Model Size | Top-1 Acc | Memory | Speed|\n|----------------------|-------------------|------------------|-----------------|------------|\nSDNet-18 | 11.2M | 95.20% | 1.2GB | 1500 n/s |\nSDNet-18-all | 11.2M | 95.18% | 2.5GB | 720 n/s |\n",
" We thank the reviewer for the careful reading of our manuscript and the constructive remarks. Here we reply to those specific questions.\n\n>Q1: Have the authors considered visualizing the learned sparse dictionary convolutional kernels common in related literature ? I believe this would help with interpretability and understanding what the dictionary of convolutional kernels encodes.\n\nA: Thank you for the comment. We visualize the learned dictionary and post it in the appendix (Figure C.1) of the revised version.\n\n>Q2: Typically, the FISTA algorithm requires hundreds of iterations to converge so my expectation is that the reconstructions x=Az with only 2 iterations are not high fidelity (e.g., terms of PSNR). This is supported by the visualization in Appendix B2 which shows that feature maps only encode contours or high-level information about the input. The authors mention that increasing the number of FISTA iterations can boost the classification performance a bit. Have the authors’ studied how increasing the number of FISTA iterations affects the model’s robustness to noise or can they provide intuition about it?\n\nA: Thank you for the comments. The following table shows how the number of FISTA iterations affects the model’s robustness to noise. The model is trained on the ImageNet dataset. The “natural accuracy” column is the accuracy tested on the validation set of ImageNet, the columns “Gaussian”, “Shot”, and “Impulse” are three different noises from ImageNet-C. We report the top-1 accuracy results with adaptive lambda. While using more iterations slightly increases the model performance on both natural accuracy and robust accuracy. \n\n|# of FISTA iterations |natural accuracy | Gaussian | Shot | Impulse|\n|----------------------------|---------------------|-------------|-----------|-----------|\n|2 | 69.47% | 29.16% | 27.59% | 22.01%|\n|4 | 69.51% | 29.69% | 28.15% | 24.15%|\n|8 | 69.79% | 30.91% | 29.87% | 26.69%|\n\n>Q3: My understanding is that only the first convolutional layer of ResNet-18 and ResNet-34 (the one closest to the input) is replaced by a CSC-layer. Is this correct or does “the first convolutional layers” (line 235) refer to the first convolutional layer of each ResNet block?\n\nA: Yes, only the first convolutional layer of ResNet-18 and ResNet-34 (the one closest to the input) is replaced by a CSC-layer.\n\n>Q4: How is the value of lmdb=0.1 used during training selected? What is the size of C used in experiments, i.e. the number of sparse feature maps in (line 125)? How sparse on average are the feature maps output by FISTA when only 2 iterations are used with regularization coefficient?\n\nA: The value of $\\lambda$ was selected based on grid search and the one corresponding to the best test accuracy was chosen. The number of sparse feature maps is the same as the channel number of ResNet in each layer, which are 3 -> 64 -> 128 ->256 -> 512 as in each block of ResNet18/34. We also test the sparsity of the feature map on all 10000 CIFAR-10 test samples and find that 52% values are exactly 0, while the feature map of the convolutional layer in ResNet is dense. The histogram of the feature map absolute values is shown in the appendix (Figure D.1) of the revised version. \n\n>Q5: What magnitudes do levels 0-6 in Figure 2 correspond to for each type of noise? E.g. for Gaussian noise, what levels of noise are considered? Same for Tables 2 and 3.\n\nA: In our experiments, we use the CIFAR-C and ImageNet-C data. 
The noises are added to the clean data with pixel values in the range of [0, 1]. The specific noise parameters for severity levels 1-5 are as follows. For the Gaussian noise, the standard deviation is 0.08, 0.12, 0.18, 0.26, 0.38. For the shot noise, the parameter values are 60, 25, 12, 5, 3. For the impulse noise, the amount of s&p impulses is 0.03, 0.06, 0.09, 0.17, 0.27. For the speckle noise, the standard deviation of the Gaussian multiplier is 0.15, 0.2, 0.35, 0.45, 0.6. For the detailed implementation, please check the code of [1].\n\n[1] Hendrycks D, et al. Benchmarking neural network robustness to common corruptions and perturbations[J]. arXiv:1903.12261, 2019.\n\n>Q6: Have the authors studied how robust SD SCN model is to additive noise? Would be insightful to add it as a baseline in Tables 2 and 3.\n\nA: We test the robustness of SCN on CIFAR10-C and show its results in Tables 2 and 3 of the revised version. The results are very close to those of our SDNet18 w/ $\\lambda$=0.1.\n\n>Q7: Are the authors planning to provide an implementation of the proposed framework?\n\nA: Thank you for the suggestion; we will release the code.\n\n>Q8: Nitpicks. Figures 2 and 3 would be easier to read if the font size is bigger.\n\nA: Thanks for pointing this out. We have fixed it in the revised version.\n",
" We thank the reviewer for the careful reading of our manuscript and the constructive remarks. Here we reply to those specific questions.\n\n>Q1: In Table 1, the model size of SDNet-18 and SDNNet-34 on CIFAR-100 are much smaller than on CIFAR-10, which seems weird.\n\nA: Thank you for the comments. It is indeed a typo. The model size of SDNet-18 and SDNet-34 on CIFAR-100 are 11.3M and 21.2M, respectively. We have fixed it in the revised version.\n\n>Q2: With similar performance, the proposed method is much faster than its baselines. In Table 1, the proposed SDNet only replaces the first convolutional layer with CSC-layer while SCN is a multilayer sparse coding network. For a fair comparison, this paper may compare the time and memory consumption of a single sparse coding layer between those methods.\n\nA: Following the reviewer’s suggestion, we replace the first convolution layer of ResNet18 with the sparse code layer from SCN[1], and keep the parameters the same as ResNet18 such as channel, strides, kernel size, etc. The comparisons of model size, test accuracy, memory used during training, and training speed are shown as follows:\n\n| CIFAR10 | Model Size | Top-1 Acc | Memory | Speed |\n|---------------|-------------------|------------------|----------------|---------|\n|ResNet18 | 11.2M | 95.54% | 1.0GB |1600 n/s |\n|SCN | 0.7M | 94.36% | 10.0GB | 39 n/s |\n|SCN-first | 11.2M | 95.12% | 3.5GB | 158 n/s| \n|SDNet18 | 11.2M | 95.20% | 1.2GB | 1500 n/s |\n\n\n\n| CIFAR100 | Model Size | Top-1 Acc | Memory | Speed |\n|---------------|-------------------|------------------|----------------|---------|\n|ResNet18 | 11.2M | 77.82% | 1.0GB |1600 n/s |\n|SCN | 0.7M | 80.07% | 10.0GB | 39 n/s |\n|SCN-first | 11.2M | 78.59% | 3.5GB | 158 n/s| \n|SDNet18 | 11.2M | 78.31% | 1.2GB | 1500 n/s |\n\nIt can be seen that SCN-first is still much slower than our SDNet. \n\n>Q3: Each layer of CSC-layer of SDNet-18 and SDNet34 needs unrolling two iterations of FISTA and more iterations will only slightly improve the performance. As SDNet-18 and SDNet-34 have only one CSC-layer for the input images, I’m curious whether it is this low dimension (3 channels) of input that make two iterations sufficient. On SDNet-18-All and SDNet-34-All, could you list the dimension of the input and output of each CSC-layers and their corresponding iterations used?\n\nA: In SDNet18/34-All, the dimensions of the input and output of each CSC-layers are precisely the same as the one corresponded in ResNet 18/34, which are 3 -> 64 -> 128 -> 256 -> 512. And 2 FISTA iterations are used in all CSC-layers. We have conducted the ablation study on ImageNet, and we find that SDNet-18 with 2, 4, and 8 iterations of FISTA obtains 69.47%, 69.51%, and 69.79% Top-1 accuracy, respectively. While using more iterations slightly increases the model performance, it comes at the cost of increasing the training time and memory requirement as a result of unrolling of the FISTA algorithm. Hence, in all our experiments we use SDNet with 2 iterations. We will leave the exploration of adaptive FISTA iteration for different layers in the future work. \n\n>Q4: What is the cost to find the optimal lambda and calculate the residual in Algorithm 1?\n\nA: 1. For finding the relationship of optimal $\\lambda$ as a function of $r$ (i.e., the function in Alg. 1, line 12), we need to perform #types_of_noise $\\times$ #noise_levels forward processes for the training data (or a subset of the training data). 
Note that this only needs to be performed once and can be subsequently used for any test data.\n2. For predicting the labels on a test set, we need to perform 2 forward processes for each data in the test set, where the first is used to obtain the residual and optimal $\\lambda$, and the second is for obtaining final robust accuracy under the optimal $\\lambda$.\n\n>Q5: I would like to confirm whether it is z of a CSC-layer that will be the input of the next layers.\n\nA: Yes. The sparse code of the CSC-layer will be the input of the next layer. \n\n>Q6: Does FISTA algorithm always randomly initialize z for any CSC-layer in any iteration during the training? If it is, is it possible to initialize it with the previous leaned values for an image when the model sees it again? This may reduce the number of FISTA iterations.\n\nA: In our work, the FISTA always initializes z from zeros for any CSC-layer during the training. Since data augmentation is used during training, the training data changes in different epochs even with the same input image. Hence, initializing z from those in previous epochs will not offer any benefits.\n",
" This paper proposed the convolutional sparse coding layer (CSC-layer) to obtain performance comparable to standard ConvNets with better interpretability and stability. By conducting extensive experiments on CIFAR-10, CIFAR-100, and ImageNet, the model with CSC-layer has shown to achieve comparable performance as standard ConvNets and be more robust to data perturbation than the standard one. Strengths:\nThis paper proposed an interesting and effective idea for robust inference with the proposed convolutional sparse coding layer and achieved impressive performance on both image corruption and adversarial attack. \n\nThe weaknesses mainly lay in the cost of the training and robustness inference: \n1. This paper claims the fast training speed as one of its contributions. However, this needs careful comparison with its baselines. \n2. Although this method does not require modifying the training procedure like the existing methods to obtain robustness, it has additional costs to get the optimal $\\lambda$ for robustness inference. 1. In Table 1, the model size of SDNet-18 and SDNNet-34 on CIFAR-100 are much smaller than on CIFAR-10, which seems wired. \n2. With similar performance, the proposed method is much faster than its baselines. In Table 1, the proposed SDNet only replaces the first convolutional layer with CSC-layer while SCN is a multilayer sparse coding network. For a fair comparison, this paper may compare the time and memory consumption of a single sparse coding layer between those methods. \n3. Each layer of CSC-layer of SDNet-18 and SDNet34 needs unrolling two iterations of FISTA and more iterations will only slightly improve the performance. As SDNet-18 and SDNet-34 have only one CSC-layer for the input images, I’m curious whether it is this low dimension (3 channels) of input that make two iterations sufficient. On SDNet-18-All and SDNet-34-All, could you list the dimension of the input and output of each CSC-layers and their corresponding iterations used? \n4. What is the cost to find the optimal $\\lambda$ and calculate the residual in Algorithm 1? \n5. I would like to confirm whether it is $z$ of a CSC-layer that will be the input of the next layers. \n6. Does FISTA algorithm always randomly initialize $z$ for any CSC-layer in any iteration during the training? If it is, is it possible to initialize it with the previous leaned values for an image when the model sees it again? This may reduce the number of FISTA iterations. \n This paper introduced an appealing method to apply CSC-layer for robustness inference. It is great if this paper has a more detailed comparison of its time and memory for both training and inference with the other methods. ",
" This paper proposes an approach to incorporate convolutional sparse coding into deep neural networks. In particular, a convolutional layer of a ResNet is replaced with an implicit layer, referred to as convolutional sparse coding layer (CSC-layer), which outputs a sparse representation (a feature map) of its input given a (learned) dictionary of convolutional filters. The sparse representation for an input is computed using the FISTA algorithm. The experiments show that ResNet models whose first convolutional layer is replaced by such a CSC-layer can achieve similar or better classification performance on the CIFAR-10, CIFAR-100, and ImageNet datasets. Additionally, experiments suggest that ResNet models with a CSC-layer are more robust to different kinds of additive noise applied to the input when compared to a regular ResNet model. Strengths\n\n- The proposed method which combines sparse coding with deep neural networks is scalable to datasets such as ImageNet.\n\n- With only 2 iterations of FISTA, the proposed modification of the ResNet model is more robust to additive noise in the input than the vanilla ResNet architecture when evaluated on the task of classification.\n\n- The paper is organized and written clearly. \n 1. Have the authors considered visualizing the learned sparse dictionary convolutional kernels common in related literature (eg. [1])? I believe this would help with interpretability and understanding what the dictionary of convolutional kernels encodes.\n\n2. Typically, the FISTA algorithm requires hundreds of iterations to converge so my expectation is that the reconstructions $\\tilde{x} = \\mathcal{A}(z_*)$ with only 2 iterations are not high fidelity (e.g., terms of PSNR). This is supported by the visualization in Appendix B2 which shows that feature maps only encode contours or high-level information about the input. The authors mention that increasing the number of FISTA iterations can boost the classification performance a bit. Have the authors’ studied how increasing the number of FISTA iterations affects the model’s robustness to noise or can they provide intuition about it?\n\n3. My understanding is that only the first convolutional layer of ResNet-18 and ResNet-34 (the one closest to the input) is replaced by a CSC-layer. Is this correct or does “the first convolutional layers” (line 235) refer to the first convolutional layer of each ResNet block? \n\n4. How is the value of $\\lambda = 0.1$ used during training selected? What is the size of $C$ used in experiments, i.e. the number of sparse feature maps in $z$ (line 125)? How sparse on average are the feature maps output by FISTA when only 2 iterations are used with regularization coefficient $\\lambda = 0.1$?\n\n5. What magnitudes do levels 0-6 in Figure 2 correspond to for each type of noise? E.g. for Gaussian noise, what levels of noise are considered? Same for Tables 2 and 3.\n\n6. Have the authors studied how robust SD SCN model (from [3]) is to additive noise? Would be insightful to add it as a baseline in Tables 2 and 3.\n\n7. Are the authors planning to provide an implementation of the proposed framework?\n\n8. Nitpicks\n- Figures 2 and 3 would be easier to read if the font size in is bigger. \n\n[1] Kavukcuoglu, K., Sermanet, P., Boureau, Y.L., Gregor, K., Mathieu, M. and LeCun, Y., 2010. Learning convolutional feature hierarchies for visual recognition. Advances in neural information processing systems, 23.\n\n[2] Beck, A. and Teboulle, M., 2009. 
A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1), pp.183-202.\n\n[3] Sun, X., Nasrabadi, N.M. and Tran, T.D., 2018, October. Supervised deep sparse coding networks. In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 346-350). IEEE.\n The authors are upfront about the limitations of higher memory and inference costs of unrolling FISTA for a higher number of iterations. They could elaborate on the potential negative societal impacts, for example, the fact that the model can propagate any bias present in the dataset.\n",
" The paper proposes to replace the traditional convolutional layers with convolutional sparse coding (CSC) layers, claiming that such substitution adds interpretability and robustness to neural networks. The writing is generally good and easy to follow; however, the technical claims in the paper are not well supported by empirical evidence. I have three main concerns about the claims in the paper.\n\n**1. Interpretability**. The traditional convolutional layer performs a forward computation (the output is a linear combination of the inputs). In contrast, the convolutional sparse coding (CSC) layer performs a backward computation (the input is a linear combination of the outputs). It is not apparent why a backward computation is more interpretable than a forward one. In my opinion, it is not an individual layer that makes a neural network hard to interpret but the stack of these layers. While convolution layer and convolutional sparse coding are easy to interpret individually, using them in deep networks (with nonlinearities, normalization, etc.) is not.\n\n**2. Robustness**. The explanation of why the CSC layer is more robust is insufficient --- no math derivation is used to explain this concept. From the writing, it seems the Lipschitz constant decreases if the regularizer increases. In this case, the comparison to standard convolutional networks is not fair. Maybe the authors would also like to enforce the Lipschitzness of traditional convolutional networks (e.g., following https://arxiv.org/abs/1804.04368).\n\n**3. Computational Complexities**. It looks to me that the proposed layer is quite expensive. In the experiment, only one layer in ResNet is replaced by the proposed layer, and only two iterations are used in unrolling. And this already decreases the speed from 1000 to 900. I think a more comprehensive study on the relationship between accuracy, complexity, and iterations is needed when all layers are replaced. Not applicable."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"WeMOMstw1TUf",
"vapRlkFEwmp",
"I1RilkEObZ",
"7_7MvUX6HN",
"e_vbTgs8Jp1",
"JlPDGbyfO0j",
"96rTFKvFlxz",
"JlPDGbyfO0j",
"7CxjdvC29S",
"e_vbTgs8Jp1",
"nips_2022_INzRLBAA4JX",
"nips_2022_INzRLBAA4JX",
"nips_2022_INzRLBAA4JX"
] |
nips_2022_vt516zga8m | Cache-Augmented Inbatch Importance Resampling for Training Recommender Retriever | Recommender retrievers aim to rapidly retrieve a fraction of items from the entire item corpus when a user query requests, with the representative two-tower model trained with the log softmax loss. For efficiently training recommender retrievers on modern hardware, inbatch sampling, where the items in the mini-batch are shared as negatives to estimate the softmax function, has attained growing interest. However, existing inbatch sampling based strategies just correct the sampling bias of inbatch items with item frequency, being unable to distinguish the user queries within the mini-batch and still incurring significant bias from the softmax. In this paper, we propose a Cache-Augmented Inbatch Importance Resampling (XIR) for training recommender retrievers, which not only offers different negatives to user queries with inbatch items, but also adaptively achieves a more accurate estimation of the softmax distribution. Specifically, XIR resamples items from the given mini-batch training pairs based on certain probabilities, where a cache with more frequently sampled items is adopted to augment the candidate item set, with the purpose of reusing the historical informative samples. XIR enables us to sample query-dependent negatives based on inbatch items and to capture dynamic changes of model training, which leads to a better approximation of the softmax and further contributes to better convergence. Finally, we conduct experiments to validate the superior performance of the proposed XIR compared with competitive approaches. | Accept | The idea of more representative mini-batches sounds like a natural extension of the work done on stratified sampling. The reviewers were convinced the idea is both new and effective on real data. In particular, the discussion with mWFy clarified, by comparing this work to other relevant work, that it is an alternative way to explore the capacity of ranking optimization by leveraging the samples shared in the batch data. | train | [
"L-7S3CTvD-i",
"AOZguDUkzaH",
"BN9OYbkKnTd",
"-JAcLZIoz1q",
"dciMvLyYiu",
"k4u3tCZNLCU",
"Hot3AqG2LFDu",
"qL6ggx0CkVr",
"Lzg_294QaTy",
"2trKDGSHCXa",
"D8Op6AL27UU",
"rExhbl85t1j",
"ndWegZgBPA",
"Ji-39VpOiau"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the rebuttal and appreciate the efforts. I think most of my comments/questions are addressed. Other reviews and related discussions also resolve my concern on the level of contributed. Changing the rating to 6.",
" Dear Authors:\n\nThank you so much for further feedback to clarify the concerns about the false negative issue and the details about the implementation of adaptive samplers like DNS, LambdaFM. It's good to see more justification that helps a lot to dig more insight about this work. Inbatch negative sampling raises unique challenges comparing to sampling from the whole corpus. This work presents an alternative way to explore the capacity of ranking optimization by leveraging the samples shared in the batch data. Most of my concerns have been addressed, even we have a difference of opinion on some issues. It's interesting to see that the performance of the proposed method has close relation to the batch size. I think it deserves more attention on designing efficient inbatch sampling method. After reading the rebuttal and checking out the code, I'd like to turn to accept this work.\n\nMinor Issues:\n1. Please explain what is the meaning of SSL in Section 4.1.2? It's better to give the complete name when it shows up at the first time. ",
" \n**Why sampling from the softmax distribution and why the softmax sampler better**\n\nAccording to the theoretical work[1] from Google research, softmax cross entropy loss is a bound on mean Normalized Discounted Cumulative Gain (NDCG) in log-scale when working with binary ground-truth labels. These theoretical results suggest that optimizing the softmax cross entropy loss is an indirect attempt at optimizing NDCG when given binary relevance judgments. Therefore, the recommender models trained with implicit feedback can achieve superior recommendation accuracy in terms of NDCG and other similar ranking metrics when optimizing the softmax cross entropy loss. Assuming user $u$ has interacted with item $i$, the softmax cross entropy loss is formulated as follows:\n$$\n\\ell(u,i)= - s_\\theta(u,i) + \\log\\sum_{j=1}^N \\exp (s_\\theta (u,j))\n$$\nwhere $s_\\theta(u,i)$ indicates the preference score of user $u$ for item $i$, and $N$ denotes the number of items. It is easily observed that the computational cost of the loss scales linearly with $N$. To motivate how to reduce the time cost, we first derive the gradient of the loss w.r.t to the parameter $\\theta$ as follows:\n$$\n\\nabla_\\theta \\ell(u,i)=-\\nabla_\\theta s_\\theta(u,i)+\\sum_{j=1}^N P(j|u) \\nabla_\\theta s_\\theta(u,j)\n$$\nwhere $P(j|u)=\\frac{\\exp(s_\\theta(u,j))}{\\sum_k \\exp(s_\\theta(u,k))}$ denotes the softmax distribution. *From this equation, we can see that in order to reduce the time cost of parameter update, we can sample some items from the softmax distribution $P(j|u)$ and then estimate the gradient with these samples. We can easily observe that the estimated gradient is unbiased and of a low variance[2]. The unbiasedness and low-variance gradient could make the optimization converge to optimum as fast as possible. Other samplers lead to either biased gradients or unstable gradients, so the converged solutions should be of lower quality than the solutions with the softmax sampler. The characteristics of the softmax sampler motivate why to sample from the softmax distribution.* Note that based on the gradient estimation, we can derive the sampled softmax loss[3,7]. The optimization of the sampled softmax loss for recommender system is now widely-studied by Ed. Chi's team[4,5], as well as Steffen Rendle[6] from Google, reporting the superiority over other baselines.\n\n\n\n**Recommendation method should give a large probability to the ground truth item**\n\nYes. It should. But we have to point out the differences between sampling and retrieval. Sampling items from the softmax distribution is used for training while retrieving items with large probabilities is used for top-k recommendation. Please note that sampling is stochastic while retrieval is deterministic. Retrieval only cares about the relative of probability to the other items while sampling concerns the absolute probability of items themselves. This is not contradictory.\n\n\n\nReferences\n\n[1] Bruch, S., Wang, X., Bendersky, M., & Najork, M. (2019, September). An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance. In *Proceedings of the 2019 ACM SIGIR international conference on theory of information retrieval* (pp. 75-78).\n\n[2] Owen, A. B. (2013). Monte Carlo theory, methods and examples.\n\n[3] Bengio, Y., & Senécal, J. S. (2003, January). Quick training of probabilistic neural nets by importance sampling. In *International Workshop on Artificial Intelligence and Statistics* (pp. 17-24). 
PMLR.\n\n[4] Yi, X., Yang, J., Hong, L., Cheng, D. Z., Heldt, L., Kumthekar, A., ... & Chi, E. (2019, September). Sampling-bias-corrected neural modeling for large corpus item recommendations. In *Proceedings of the 13th ACM Conference on Recommender Systems* (pp. 269-277).\n\n[5] Yang, J., Yi, X., Zhiyuan Cheng, D., Hong, L., Li, Y., Xiaoming Wang, S., ... & Chi, E. H. (2020, April). Mixed negative sampling for learning two-tower neural networks in recommendations. In *Companion Proceedings of the Web Conference 2020* (pp. 441-447).\n\n[6] Blanc, G., & Rendle, S. (2018, July). Adaptive sampled softmax with kernel based sampling. In *International Conference on Machine Learning* (pp. 590-599). PMLR.\n\n[7] Bengio, Y., & Senécal, J. S. (2008). Adaptive importance sampling to accelerate training of a neural probabilistic language model. *IEEE Transactions on Neural Networks*, *19*(4), 713-722.",
" **How to deal with false negative**\n\nThe previous analysis tells us that we are optimizing NDCG in the training set. According to the machine learning generalization theory, NDCG can be guaranteed in the test set. Therefore, from this theoretical perspective, we can answer that our method essentially deals with false negative issues.\n\nFrom another perspective, according to our provided statistics, the sampling probability of positive items in the test set is extremely low. The sampling probability becomes smaller with the increasing number of items. Since the negative items are randomly sampled according to the probability, these positive items in the test set are rarely sampled as negatives. In other words, false negative cases are hardly encountered in the training process. Intuitively speaking, the occasional occurrences of these cases make little effect on the generalization performance. This intuition is also consistent with the aforementioned theoretical results.\n\n\n**Why adaptive sampling methods (DNS, LambdaFM) perform worse than methods (SSL, SSL-Pop)** **in some datasets**\n\nDNS and LambdaFM first sample some items according to the uniform distribution, within which the top items are more likely to be sampled as negatives. In other words, DNS and LambdaFM learn to rank positive items higher than **the top item** within a sampled item pool from the uniform distribution. Different form DNS and lambdaFM, when optimizing the sampled softmax loss (the loss $\\ell(u,i)=-s(u,i)+\\log \\sum_{j\\in \\mathcal{C} \\cup \\{ i \\}} \\exp s(u,j)$) with the uniform sampler, the model should try to rank positive items in front of **all items** in the sampled item pool from the uniform distribution. Therefore, optimizing the sampled softmax loss with the uniform sampler can be considered as adaptive as DNS and LambdaFM. Intuitively speaking, compared to DNS and LambdaFM, optimizing the sampled softmax loss with the uniform sampler can learn the model better, thanks to the following two reasons.\n+ Since more items are used for optimization, their related parameters are simultaneously updated in each batch. \n+ According to the previous analysis, it essentially optimizes NDCG in the training set, thus it probably outperforms DNS and LambdaFM in the test set with respect to NDCG and other ranking-oriented metrics.\n\nTherefore, when the uniform sampler is used, optimizing the sampled softmax loss usually leads to better recommendation performance.\n\nHowever, SSL and SSL-Pop in this paper are slightly different. In particular, SSL-Pop is the same as the one optimizing the sampled softmax loss with the popularity-based sampler with respect to bias correction, but they differ in whether sampling items within the batch or not. Though inbatch sampling only considers items in the batch, random shuffle of the training dataset in each epoch makes their difference negligible as long as the model is trained for a sufficiently large number of epochs. *Therefore, combined with the analysis in the previous part, SSL-Pop can perform better than DNS and LambdaFM*. SSL slightly differs in the removal of bias correction from SSL-Pop, so that SSL can not perform as well as SSL-Pop. However, due to inheriting other advantages of SSL-Pop, SSL can still probably perform better in some cases.\n\nWe have carefully checked the code, and did not discover bugs or issues. The source code of this project is also uploaded as supplementary materials. The reviewers are welcomed to check. 
Here, we also provide more implementation details about DNS and LambdaFM. In particular, within each batch of size $B$, each positive user-item pair is compared with $B$ negatives, each of which is obtained by the sampler of either DNS or LambdaFM (the candidate item size is 5). The algorithms are trained with Adam using a fixed learning rate of 0.01 for 100 epochs, with an embedding size of 32. The weight decay is tuned over {$0.01, 0.001, 0.0001, 0.00001$} for DNS and LambdaFM with respect to the NDCG\\@10 performance on the validation dataset.",
" Dear Authors:\n\nGreat thanks for the detailed response to the raised questions. After reading the reply, I decide not to change my score because of the following concerns:\n\n1. It's a good trial to explain the false negative sampling issue by checking the sampling probability of ground truth recommendations in the test set. However, the conclusion is counterintuitive to many pioneering research. It's not convincing enough because of using the exact probability but the relative value to the other items. It's a natural thing that the more number of items it is, the smaller the average softmax probability will be. Taking the Gowalla as an example, if we take a negative sample from a uniform distribution, the sampling probability is 0.0024%. According to the given statistics by the authors, average sampling probability from a softmax distribution is twenty times over the uniform distribution. Softmax distribution is a skew distribution, but it will become a smoothing distribution as the increasing number of items. Therefore, the statistics shown in the table can not provide convincing evidence to show that the proposed method can deal with the false negative issue. Moreover, if a recommendation method can not give a large probability to the ground truth item, how it's possible to achieve superior ranking performance?\n\n2. Negative samples can be sampled from different distributions [2]. In the second raised concern, I'm actually caring about why sampling from a softmax distribution in this work is superior to the methods which depends on as sampling distribution p(j |u,i) such as LambdaFM, WARP, also those methods from the variants of softmax distributions, like DNS, PRIS, Self-adversarial? While, it's difficult to find the answer from the response.\n\n3. Thanks for making further experiments to check the performance of more baselines. However, the experimental results shown in the table above have conflicts with many pioneering works. It's difficult to believe that an adaptive sampling methods (DNS, LambdaFM) performs worse than methods (SSL, SSL-POP) sampling from static distribution (uniform or popularity-based distribution) in the datasets like Echonest, Amazon.\n\nIn summary, this work should spend more words on clarifying the motivations and double check the setting of the implementation of the baselines to make the experimental results convincing enough to support the claims in this work.\n\nReferences:\n[1] One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities, NIPS 2016.\n[2] Negative Sampling for Contrastive Representation Learning: A Review, IJCAI 2022.",
" Are you convinced by the clarifications, in particular about the comparisons you asked for?",
" Thank you for your valuable reviews. Here are our responses to your comments.\n\n1. [False negatives] \n > **Comment1**: The proposed method ignores the false negative sample issue if sampling from the full softmax distribution. An item with large similarity does not have to be a hard negative sample.\n \n > **Comment2**: It should be essential capacity of negative sampling method to deal with false negative samples. Please justify how the proposed method can make it?\n\n False negative is indeed a challenging issue in negative sampling for recommender systems. In order to investigate the effect of false negatives in our samplers, we summarize the sampling probability of 'false negatives', which experimentally refer to items appearing in the testing data but not in the training data. Specifically, we calculate the average sampling probability in two ways. $$p_{global} = \\frac{1}{|D_{test}|} \\sum_{(u,j) \\in D_{test}} p(j|u)$$ where $p(j|u) = \\frac{\\exp(s(u,j))}{\\sum_{i\\in \\mathcal{I}} \\exp(s(u,i))}$ denotes the softmax probability calculated by the well-trained model. This one describes the sampling probability over all the interacted items in the testing data. $$p_{user} = \\frac{1}{M}\\sum_{u} \\sum_{(u,i) \\in D_{test}}p(j|u)$$ This average value represents the probability of sampling false negatives given a user.\n \n\n | DataSet | $p_{global}$ | $p_{user}$ |\n | :-----: | :----------: | :--------: |\n | Gowalla | 0.00050 | 0.00345 |\n | Amazon | 0.00015 | 0.00050 |\n \n As we can see in the table, the sampling probability for negative samples is relatively low. On the Gowalla dataset, the average probability of a negative sample being sampled is only 0.345% given a certain user. On the much larger dataset, Amazon, the probability of sampling false negatives decreases, indicating a lower likelihood of being sampled.\n \n\n2. [Focus on sampling from a softmax distribution]\n > **Comment1**: It deserves more words to compare with methods beyond estimating softmax distribution.\n \n > **Comment2**: Please give more discussion on the superiority of the proposed method comparing with the other kinds of sampling method which does not attempt to sample from a softmax distribution in the introduction section.\n\n There are various types of loss functions in general recommender systems and information retrieval, such as point-wise loss (binary cross-entropy loss), pair-wise loss (BPR loss), etc. However, recommender retrievers aim to recall a fraction of items for the subsequent stage, and are optimized under the log-softmax loss, which is more consistent with the rank-based metrics than the pair-wise and point-wise losses. When encountered with numerous candidate items, negative samplers provide an estimation of the original softmax distribution with a smaller number of items, significantly reducing time complexity. Thus, we concentrate primarily on samplers for estimating the softmax distribution.\n \n3. [Comparison with Suggested Baselines]\n > **Comment1**: At least, the typical method PRIS should be a strong baseline which also estimates the softmax distribution with importance sampling method.\n \n > **Comment2**: Could you please compare baselines like DNS, LambdaFM, PRIS?\n \n We perform experiments on the four datasets to compare with the suggested baselines, i.e., DNS, LambdaFM and PRIS. The three baseline samplers are designed to optimize the pair-wise loss. PRIS with uniform distribution and popularity-based distribution are denoted as PRIS(U) and PRIS(P), respectively. 
The batch size is set to 2048. To keep the sampled set the same size as in this paper, each query is compared with 2048 sampled negatives. The learning rate is fixed at 0.01 and the weight decay is tuned over $\\{0.01, 0.001, 0.0001, 0.00001\\}$. The results (i.e., NDCG@10) are reported in the table below.\n\n | | DNS | LambdaFM | PRIS(U) | PRIS(P) | **BIR** | **XIR** |\n |:--------:|:------:|:--------:|:-------:|:-------:|:-------:|:-------:|\n | Gowalla | 0.1483 | 0.1435 | 0.1471 | 0.1474 | 0.1523 | 0.1543 |\n | Amazon | 0.0616 | 0.0631 | 0.0694 | 0.0703 | 0.0833 | 0.0877 |\n | Tmall | 0.0523 | 0.0511 | 0.0567 | 0.0570 | 0.0590 | 0.0658 |\n | Echonest | 0.0996 | 0.1168 | 0.1340 | 0.1343 | 0.1682 | 0.1842 |\n\n The results demonstrate that the proposed importance-resampling-based sampler, BIR, outperforms all the suggested baselines. The candidate set for sampling in the three baseline methods is the entire corpus, which is much larger than that of BIR; BIR still shows competitive performance with this limited sample set. Another difference is that the baseline methods (DNS, LambdaFM and PRIS) optimize the BPR loss, whereas BIR is trained using the sampled softmax loss, which may result in significantly different performance.",
" Thank you for your valuable reviews. Here are our responses to your comments.\n\n1. [Impact of Batch size]\n > **Comment1**: but a finite batch size theorem would make the paper much stronger because it would be far more practical. \n \n > **Comment2**: Discussion of the theorem. We do have the real-world data. Does the theorem lead to a significant difference under the usually used batch size?\n\n a. Theoretical Support on Finite Batch Size\n \n Theorem 3.1 states that the resampled items follow the softmax distribution given an infinite number of batch size. In Theorem 3.2, we examine the bias of the gradient to assess the effect of the finite batch size. The upper bound of the bias gets smaller as batch size increases, indicating that the estimated gradient is less biased.\n \n b. Experimental Results\n \n In Appendix B.3, we present the experimental results for the five datasets with varying batch sizes. BIR improves performance even with smaller batch sizes, indicating the feasibility of the proposed method for online systems.\n \n2. [Size of dataset]\n > **Comment**: For the experiments, one missing piece is the discussion of dataset sizes. The sizes are illustrated but are they large enough to fit the motivation behind the new method? Given the size, it seems naive global negative sampling can work.\n\n The evaluated datasets, Amazon and Echonest, are relatively large among public recommendation datasets in terms of the corpus size and the number of interactions. In previous studies for the inbatch sampling for recommender retrievers, i.e., G-Tower[1] and MNS[2], they conduct experiments on private real-world datasets. We also anticipate the availability of much larger datasets for future research.\n\n3. [Contribution - Algorithm or Theorem]\n > **Comment**: I want to understand the value proposition of this paper. Are we designing a practically useful algorithm or it's about some innovations behind the theorem? \n\n In this paper, we design a sampler with guaranteed theoretical provable and we follow the machine learning research paradigm to extensively verify the effectiveness from theoretical and experimental perspectives.\n \n4. [Modern Hardware & Limited Architecture]\n > **Comment**: The paper repeatedly mentioned about characteristics and contraints of modern hardware, which also guided the design of the new method. I suggest to explicitly talk about related details. Does it mean the new method is only useful for a particularly implemented system under certain architecture?\n\n We motivate the research problem through the characteristics and constraints of modern hardware, but our algorithm can be suitable for modern computing architecture. Current models are trained on devices with limited resources, such as GPUs and TPUs. In the industrial recommendation scenarios with a great number of items, the training process is performed through mini-batches on the current computing architecture. There are numerous features and complex models in realistic recommenders, while the adaptive sampler is inefficient over the entire item set, relying heavily on the inference results from the complex models.\n \n > **Comment**: It's now very clear about the usefulness of the cache. I feel the advantage is again a mixture of implementation and algorithm. I didn't find much discussion on this. It seems to me that the assumption is that getting the representation is slow.\n \n Representation is a part of the model inference process, and inferring the entire item corpus is inefficient. 
The designed cache mechanism can achieve a compromise between global and inbatch data.\n\n The cache is designed to reuse relatively informative samples from previous training epochs, with the probability of being cached based on previous occurrences. As we can see in Figure 2, the occurrence distribution for generating the cache differs significantly from the item distribution (the popularity-based distribution), where long-tailed items have a greater chance of being selected than under the previous native inbatch sampling.\n\nReferences\n\n[1] Yi X, Yang J, Hong L, et al. Sampling-bias-corrected neural modeling for large corpus item recommendations// RecSys 2019\n\n[2] Yang J, Yi X, Zhiyuan Cheng D, et al. Mixed negative sampling for learning two-tower neural networks in recommendations// WWW 2020 Companion",
" Thank you for your valuable reviews. Here are our responses to your comments.\n1. [Picked Baselines]\n > **Comment**: There might be some weaknesses in the literature review and how they do relate to the baselines picked.\n\n This paper focuses on the inbatch sampling for training retrievers in real-world online systems, and we also discuss the various samplers in more general cases for recommender systems in section 2.2. The chosen baselines are proposed to conduct effective in-batch sampling strategies to enhance performance. All baseline methods use the other training items within the mini-batch as negatives to optimize the sampled softmax loss. SSL denotes the native sampled softmax loss while SSL-Pop corrects the sampling bias with the popularity weights. G-Tower estimates the item frequency under streaming data and MNS feeds the mini-batch with the additionally uniformly sampled items.\n2. [Discussion about bias]\n > **Comment**: the competition hypothesis such as popularity-based techniques are not necessarily better (which is shown) and potentially in many scenarios, they could be even worse. This is not a critique of the proposed technique but this aspect of the problem is not well studied in the paper.\n\n The popularity-based technique does not always produce better results because it is dependent on whether the ultimate optimization goal is influenced by the popularity-based distribution of data or labels. For training retrievers with the sampled softmax, the training data within the mini-batch is skewed and thus the debias of the sample weights depending on the popularity-based distribution can keep the actual sampling distribution and sampling weight consistent. Thus, as shown in the experimental results (SSL performs worst because it does not correct the bias of the inbatch sampling), the approaches considering the popularity-based distribution significantly improve performance.\n\n3. [Suitable Situations]\n > **Comment**: What would be the recommendation of the authors to readers on which techniques to use under what conditions?]\n\n In this paper, we analyze the inbatch sampling for training retrievers in nowadays online systems and suggest the importance-resampling-based method to sample query-dependent samples for better training.\n In a broader sense, our proposed sampler is applicable to the algorithms for optimizing ranking metrics, such as NDCG and RECALL, in recommender systems and retrieval systems, especially when the amount of data is large, the features are numerous, and the model is complex.\n\n4. [Future Work]\n > **Comment**: It might be good to discuss future work, particularly along the lines of candidate generation and retrieval for inference stages.\n\n Thank you for your advice on further discussing the retriever tasks. We will attempt to design a better theoretically provable cache to train the retrievers. ",
" Thank you for your valuable reviews. We will correct the typos in this paper and highlight the results in Table 1. What follows is our response to your comments. \n\n1. [More Analysis w.r.t. Batch Size and Popularity Distribution]\n > **Comment**: However, It would be more informative and instructive if the authors could provide a more thorough analysis of the impacts of batch size and popularity distribution.\n\n According to the theoretical results, with a larger batch size, the upper bound gets smaller, indicating that the estimated gradient is less biased. In terms of the popularity distribution, if the value of the popularity differs greatly, i.e., $\\max pop(\\cdot) / \\min pop(\\cdot)$ has a greater value, it would get a larger bias of the gradient. Thank you for your suggestion and these analyses will be added.\n2. [Space Complexity]\n > **Comment**: Is it possible to provide space complexity results, too?\n\n For BIR, it only takes an additional $O(\\vert B \\vert )$ space for each user to store the index of the sampled item. Thus, for each mini-batch $B$, it takes $O(\\vert B \\vert \\times \\vert B \\vert)$ space complexity. \n \n For XIR, it further takes $O(\\vert C \\vert)$ to save the cached items and $O(N)$ to count the occurrence of the items, where $C$ denotes the cache and $N$ denotes the number of items. Thus, the space complexity is $O(\\vert B \\vert \\times \\vert B \\vert + \\vert C \\vert + N)$.\n3. [Cache Update]\n > **Comment**: I suggest considering updating the cache in a less-frequent manner and also analyzing its possible impact on recommendation quality.\n\n Thank you for your advice to update the cache less frequently to reduce the time cost.",
" This paper studies an important research problem of in-batch negative sampling for recommender systems, which aims to efficiently train recommender retrievers (also known as recall-stage models) by choosing the items in the mini-batch as negatives to estimate the softmax function. This is also highly practical and thus widely adopted in today’s industrial recommender systems. The authors first analyze and discuss two main weaknesses of existing solutions. One is a huge bias of the approximated gradient from the full softmax with the in-batch sampling. The other is a query-independent sample selection strategy that is less effective for convergence of the recommendation quality. To tackle the above two problems, the authors propose to simultaneously eliminate approximation bias caused by in-batch sampling and improve sampling quality by favoring hard negative samples. Specifically, the authors propose to resample the batch items according to the modified softmax weight over the mini-batch, which offers query-dependent negatives and theoretically provides a more accurate estimation of the full softmax. Moreover, the sampling process is augmented with hard negative samples by caching frequently sampled items. The designed Cache-Augmented In-batch Importance Resampling (shorted as χIR) significantly outperforms state-of-the-art baselines by 3.81%~17.12% on five public real-world datasets. Strengths:\n1. The studied problem is important and interesting. In-batch negative sampling is one of the most prevalent techniques for recall-stage recommendation models in real-world applications. Besides, it is more challenging than the traditional negative sampling problems.\n2. The proposed method is novel. The cooperation of importance resampling and hard sample caching can address the two remaining issues of existing in-batch sampling solutions. It is the first work to achieve query-dependent and hard in-batch negative sampling simultaneously.\n3. The two main designs are technically sound. First, a non-asymptotic bound of the proposed importance in-batch resampling method is derived and analyzed rigorously. Second, the designed cache constantly updated by frequently sampled items can capture more informative negatives.\n4. The authors conduct extensive experiments. State-of-the-art methods are compared on five real-world datasets, and the results show a significant improvement achieved by the proposed method. The ablation studies, applicability studies, and hyper-parameter studies are solid. \n5. Overall, the presentation is nice. I enjoy the reading, and I believe most readers of the NeurIPS community will, too.\n\nWeaknesses:\n1. Table 2 presents many results of the performance comparison. But, there is too much information in a single table. I suggest bolding and underlining the best and second best, respectively.\n2. I appreciate the derivation and analysis of the non-asymptotic bound (Theorem 3.2). However, It would be more informative and instructive if the authors could provide a more thorough analysis of the impacts of batch size and popularity distribution.\n3. A few typos. e.g., “Assum that the gradients of the logits” in Line 162. And y-label for Figure 3(c). 1. The authors have analyzed time complexity in Sec. 3.3 and plotted empirical results in Figure 3(c). Is it possible to provide space complexity results, too?\n2. 
Since updating the cache and occurrences takes O(N) time complexity, and item numbers can be enormous in industrial recommender systems, I suggest considering updating the cache in a less-frequent manner and also analyzing its possible impact on recommendation quality. Yes.",
" This work designs an inbatch negative sampling method for improving both effectiveness and efficiency of contrastive item recommendation task. Strengths:\n1. The proposed method is well formalized with theoretical support.\n2. The presentation is overall good enough.\n3. Experiments are conducted on large-scale datasets to evaluate the performance of the proposed approach.\n\nWeakness:\n1. The proposed method ignores the false negative sample issue if sampling from the full softmax distribution. An item with large similarity does not have to be a hard negative sample. \n2. This work mainly focuses on how to improve the sample efficiency from softmax distribution. However, it does not present the superiority of the proposed method comparing to previous works that are not based on importance sampling from softmax distribution, for example, dynamic negative sampling (DNS, Weinan Zhang et al. SIGIR 2013), lambdaFM (CIKM 2016), PRIS (The WebConf 2020), AOBPR (WSDM 2014) etc. It deserves more words to compare with methods beyond estimating softmax distribution.\n3. The baselines are not fully compared to validate the superiority of the proposed method. At least, the typical method PRIS should be a strong baseline which also estimates the softmax distribution with importance sampling method. 1. Could you please compare baselines like DNS, LambdaFM, PRIS?\n2. It should be essential capacity of negative sampling method to deal with false negative samples. Please justify how the proposed method can make it?\n3. Please give more discussion on the superiority of the proposed method comparing with the other kinds of sampling method which does not attempt to sample from a softmax distribution in the introduction section. Yes",
" The paper discusses the drawback of traditional negative sampling methods for the retrieval task for general recommender systems, and then proposes a new method which samples negative data points specifically for each user/query input, and illustrates why it could better approximate the gradient during model training. Experiments are provided to support the advantage of the proposed method. Strength:\n\n1. The problem itself is important. For extremely large scale retrieval system, the negative sampling is the key for training. The authors also capture the critical drawbacks of existing negative sampling method.\n\n2. The bias reducing theorem is a highlight of the paper.\n\n\nWeakness:\n\n1. I also want to mention theorem 3.1. Thought I don't know how difficult it is, but a finite batch size theorem would make the paper much stronger because it would be far more practical. In reality, other factors could dominate the training performance which the benefit of the new method cannot be revealed when the batch size is not big enough.\n\n2. For the experiments, one missing piece is the discussion of dataset sizes. The sizes are illustrated but are they large enough to fit the motivation behind the new method? Given the size, it seems naive global negative sampling can work. 1. I want to understand the value proposition of this paper. Are we designing a practically useful algorithm or it's about some innovations behind the theorem? In its current shape, the paper seems a mixture of both. As a result, the level of contribution cannot be well estimated. For example, if it's about theoretical innovation, then the techniques behind the proof should be highlighted; if it's about practically useful algorithms, then I don't think there are enough material in the experiments and the discussion of real-world IR system.\n\n2. The paper repeatedly mentioned about characteristics and contraints of modern hardware, which also guided the design of the new method. I suggest to explicitly talk about related details. Does it mean the new method is only useful for a particularly implemented system under certain architecture?\n\n3. Discussion of the theorem. We do have the real-world data. Does the theorem lead to a significant difference under the usually used batch size?\n\n4. It's now very clear about the usefulness of the cache. I feel the advantage is again a mixture of implementation and algorithm. I didn't find much discussion on this. It seems to me that the assumption is that getting the representation is slow. Though not very explicitly mentioned, the new method still has bias for the estimate of the gradient. This can be considered as the limitation.",
" Large-scale two-tower recommendation systems are the norm in the industry. One of the biggest challanges in model training is the growing data set sizes. This even makes constructing the training set somewhat tricky--often pushing the modelers randomly sample their data set. This is sub-optimal. The paper proposes a new model training process that uses an in-batch importance sampling in distributed systems to train large-scale recommender systems. The proposed techniques have been viewed in two segments--1) importance sampling 2) cache-augmentation. The authors compare their methods with appropriate and standard baselines and analyze the parameters to inform future users. A) Strengths\n\n-The paper touches on a simple but essential step of model training that is often neglected and, via a simple idea, dramatically improves model performance as well as provides the ability to increase model complexity or gain training time potentially.\n\n-The authors demonstrate that a modern ML production system should think about every single step end-to-end holistically and use the power of systems and algorithms jointly to optimize the whole process. \n\n-The paper is written well and rigorously explained, and enough technical details are provided. Given the space, the experimentation depth (both baseline comparison and analysis of algorithms) and findings/learnings are pretty detailed.\n\nB) Weaknesses (I don't have any major concerns. Just nit-picking):\n\n-There might be some weaknesses in the literature review and how they do relate to the baselines picked. \n\n-Not enough discussion on how these techniques would provide biases and affect the populations/society. \n\n-It might be good to discuss future work, particularly along the lines of candidate generation and retrieval for inference stages. I suspect several such learning could inform the research in this direction. Along the lines of relating the literature to baselines and final learnings/findings: \nWhat would be the recommendation of the authors to readers on which techniques to use under what conditions? \n There could be more such discussion. The importance of sampling, particularly the one done through learned embeddings, can result in significant biases in the systems. Depending on the learned embeddings, any such bias could be heavily boosted in certain scenarios. On the other hand, the competition hypothesis such as popularity-based techniques are not necessarily better (which is shown) and potentially in many scenarios, they could be even worse. This is not a critique of the proposed technique but this aspect of the problem is not well studied in the paper. \n\nOn the other hand, I understand the fact that this itself could be a separate paper. However, the paper proposes a weighting technique alternative to random sampling, this topic is at the heart of the problem. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
5
] | [
"ndWegZgBPA",
"-JAcLZIoz1q",
"dciMvLyYiu",
"dciMvLyYiu",
"Hot3AqG2LFDu",
"rExhbl85t1j",
"rExhbl85t1j",
"ndWegZgBPA",
"Ji-39VpOiau",
"D8Op6AL27UU",
"nips_2022_vt516zga8m",
"nips_2022_vt516zga8m",
"nips_2022_vt516zga8m",
"nips_2022_vt516zga8m"
] |
nips_2022_SZDqCOv6vTB | Deep Attentive Belief Propagation: Integrating Reasoning and Learning for Solving Constraint Optimization Problems | Belief Propagation (BP) is an important message-passing algorithm for various reasoning tasks over graphical models, including solving the Constraint Optimization Problems (COPs). It has been shown that BP can achieve state-of-the-art performance on various benchmarks by mixing old and new messages before sending the new one, i.e., damping. However, tuning a static damping factor for BP in existing methods is not only laborious but also harms their performance. Moreover, existing BP algorithms treat each variable node's neighbors equally when composing a new message, which also limits their exploration ability. To address these issues, we seamlessly integrate BP, Gated Recurrent Units (GRUs), and Graph Attention Networks (GATs) within the message-passing framework to reason about dynamic weights and damping factors for composing new BP messages. Our model, Deep Attentive Belief Propagation (DABP), takes the factor graph and the BP messages in each iteration as the input and infers the optimal weights and damping factors through GRUs and GATs, followed by a multi-head attention layer. Furthermore, unlike existing neural-based BP variants, we propose a novel self-supervised learning algorithm for DABP with a smoothed solution cost, which does not require expensive training labels and also avoids the common out-of-distribution issue through efficient online learning. Extensive experiments show that our model significantly outperforms state-of-the-art baselines. | Accept | 4 knowledgeable reviewers reviewed the paper, 3 of them recommending weak acceptance, 1 borderline rejection. The reviewers engaged with the authors and a discussion among the reviewers took place. The reviewers appreciate the considered problem, the novelty of the proposed approach and the reported performance improvements. At the same time, there are concerns regarding the theoretical justification of the method, the relation to existing work, and the comparison with other existing methods (lacking baselines and pushing the baselines to the limit). There was a discussion regarding the need for a theoretical justification and I side more with the reviewers who argue that such a justification is not absolutely necessary -- nevertheless, more motivation and intuition about the proposed approach should still be provided. In summary, the paper is viewed as borderline, which I agree with, but I think there are some relevant contributions which could be interesting to the community. Hence I am recommending acceptance of the paper but strongly encourage the authors to carefully consider all comments and suggestions which came up in the reviews and discussions with the reviewers when preparing the final version of their paper. | test | [
"iQNd38p_4CK",
"krlgNvO2WR",
"kwb75orZXpg",
"EXWHe1EzI-",
"iZeJF64KA9C",
"h8hq0MRGFwV",
"051ThR0SfJU",
"Ttm6q2-1-ZM",
"SlA8ZsfbHl",
"nJJ4X0k-_Zi",
"hEtc543WXkg",
"bKU3YrpIsCM",
"2I5yS8kyQUN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the very detailed and helpful response. I don't have any more questions now.",
" Thank you the authors for answering my comments and for making clear the changes on the paper.\nI've modified original rating accordingly\n",
" Thanks for the further clarifications. I appreciate the efforts in providing a comprehensive review of related works to address my concerns. As for theoretical justification, I agree that it could be challenging to provide theoretical analysis of the proposed algorithm. But it is important to justify why and how dynamic damping factors and message weights can improve the performance and to motive the usage of neural networks. I would take the authors' further clarifications into my consideration to reach my final assessment.\n",
" We sincerely thank the reviewer for the further comments. We realize that it is very challenging to give rigorous theoretical analysis of our DABP due to the inherent complexity induced by dynamic hyperparameters and neural networks [1, 2, 3], and we have instead demonstrated the effectiveness of our DABP through extensive experiments in our paper, where we have already noted in our previous responses. **However, we still want to try our best to address your concerns by giving some high-level intuitions about our method from the learning perspective as follows.**\n\nSince our model is self-supervised by the smoothed cost which can serve as a surrogate for the true solution cost in each BP iteration, the negative gradients of the smoothed cost w.r.t. probability distribution reflects the improving direction (cf. E\tq. (6)). Such gradients are propagated backward through the chain rule to the beliefs, BP messages, **neighbor weights** and **damping factors** and finally, our DABP model parameters. Therefore, by adjusting the parameters of our model according to the improving direction through a suitable optimizer (e.g., Adam), we can improve neighbor weights and damping factors, alter the composition strategy of BP messages, and hence result in beliefs that induce a better solution (and smoothed cost). \n\nRegarding the related work, **we have further surveyed comprehensively and compared our method with a wide range of related techniques for improving loopy BP**, including: (1) traditional methods like relaxation, alternatively directed message-passing, damping, etc.; (2) LP-based methods like MPLP, TRBP, Fractional BP and Norm-Product and (3) neural-based methods that learn to modify the BP messages, e.g., BPNN, FGNN and NEBP. Please kindly refer to the updated related work section of our paper. \n\n> [1] Cohen, Liel, Rotem Galiki, and Roie Zivan. \"Governing convergence of Max-sum on DCOPs through damping and splitting.\" Artificial Intelligence 279 (2020): 103212.\n\n >[2] Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, and Stefano Ermon. Belief propagation neural networks. In NeurIPS, pages 667–678, 2020.\n\n> [3] Victor Garcia Satorras and Max Welling. Neural enhanced belief propagation on factor graphs. In AISTATS, pages 685–693, 2021.\n",
" Thanks for the authors’ responses. All my concerns are responded, and I appreciate the efforts the authors spend in answering my questions. Unfortunately, my major concerns are not fully addressed by the responses. \n\nMy major concern still lies in the technical originality and theoretical justifications. The authors claim that the technical originality lies in proposing a self-supervised DNN-based BP whereby deep learning is leveraged to configure BP automatically. However, neural networks are employed more in a heuristic way without sufficient theoretical justifications (or at least some theoretical intuitions) on how the proposed damping factors and dynamic weights can improve upon vanilla BP. \n\nIn addition, it is important to contrast the proposed method with existing BP algorithms systematically to highlight the motivation and importance of the proposed damping factors and dynamic weights. Only comparing to two algorithms (TRBP and Fractional BP) as provided in the response is not sufficient.\n\nBased on my justifications above, I will keep my original rating. \n",
" We thank Reviewer rbQp for insightful comments and helpful feedback on our work. We address Reviewer rbQp's suggestions and respond to specific comments below\n\n**1. The notations in BP formalization**\n\n**Response:** We thank the reviewer for suggestion of refining the belief propagation formalization. We have revised the notation and added the missed descriptions.\n\n**2. Regarding the scaling factor in Eq. (6)**\n\n**Response:** The purpose of introducing the scaling factor $|N_i|-1$ is to keep the magnitude of the messages in our DABP the same as BP and DBP, since we have required the neighbor weights to sum to 1 in Eq. (6). Intuitively, each message from the neighboring function-node contributes 1 unit in the new variable-to-function message in BP (resp. $1-\\lambda$ unit in DBP), and thus all the messages from neighboring non-target function-nodes contribute $|N_i|-1$ units in BP (resp. $(1-\\lambda)(|N_i|-1)$ units in DBP). With the scaling factor of $|N_i|-1$, each message in our DABP contributes $(|N_i|-1)w^t_{m\\to i}(\\ell)(1-\\lambda^t)$ and overall contribution is $(|N_i|-1)(1-\\lambda^t)\\sum_{f_m\\in N_i\\backslash{f_\\ell}}w^t_{m\\to i}(\\ell)=(|N_i|-1)(1-\\lambda^t)$, which is the same as BP if $\\lambda^t=0,$ and otherwise, DBP.\n\nBesides, it is noteworthy that new estimation cannot bias the computation, since the old estimate was also scaled by $|N_i|-1$. By the linearity of multiplication, the weights of the new estimation and the old estimation are summed to 1.\n\n**3. Regarding the description of Toulbar2 in experimental result**\n\n**Response:** By saying “Toulbar2 performs poorly”, we essentially refer to the fact that Toulbar2 fails to find high-quality solutions even if a very generous runtime budget (i.e., 20min) is given.\n\n**4. Regarding the experimental results of Toulbar2 and MBE on WGCPs**\n\n**Response:** Since the domain size in WGCP experiments is set to a relatively small number (i.e., 5), MBE with i-bound of 9 can perform variable elimination quickly. For the performance of Toulbar2 on WGCPs with |$X$|=60, we have double-checked the correctness of the results. The reasons could be that 1) the domain size of each variable is relatively small, making the problems easier to be solved by a search-based solver like Toulbar2, and 2) the constraint functions are highly-structured (i.e., most entries are close to 0), making it easier to get a high-quality upper bound and therefore lead to more efficient pruning. We have included the discussion in line 533-536.\n\n**5. Tuning the damped factor of DBP(-SCFG)**\n\n**Response:** Please refer to our response 1 to Reviewer jt6S ",
" We thank Reviewer 1zWz for insightful comments and helpful feedback on our work. We address Reviewer 1zWz's suggestions and respond to specific comments below\n\n**1. Regarding the technical originality**\n\n**Response:** The goal of this work is not just to simply leverage attention to weight messages but rather to come up with a novel self-supervised DNN-based BP, DABP, for solving COPs by seamlessly integrating BP, Gated Recurrent Units (GRUs), and Graph Attention Networks (GATs) within the message-passing framework to reason about dynamic weights and damping factors for composing new BP messages. \n\nWe want to highlight that, different from the work of Zhang et al. which targets deep representation learning, we do not aim to advance graph representation learning but rather to leverage deep learning to configure BP automatically for solving COPs. Collectively, building on top of GRUs and GATs, we aim to infer the optimal damping factors and message weights of BP by capturing the message dynamics with GRUs and embedding the factor graphs with GATs; Our model is then trained with a novel self-supervised learning loss. Of course, one can replace GATs in our model with a more advanced graph embedding technique such as the method of Zhang et al. to further improve our model. \n\n> Zhang, Li, et al. \"Dynamic graph message passing networks.\" In: CVPR 2020.\n\n**2. Regarding the significance of DABP**\n\n**Response:** Our DABP’s computing time mostly depends on the number of rounds to run (cf. the hyperparameter “R” in line 2 of Algorithm 1), and DABP can be faster than all baselines by setting a smaller R. In our experiments, by setting R=5, DABP always finds better solutions than the strongest baseline DBP-SCFG within comparable computing time (cf. Table 1). In your example, on small-work networks with |X|=100, DABP (R=5) achieves a cost of 25.67 which is better than DBP-SCFG (cost = 26.13), but DABP only takes 1m56s while DBP-SCFG takes 2m10s. Therefore, one can vary R according to computing budgets, and we find that a small R is often enough. \n\nThe proposed training objective (Eq. 14) may seem computationally expensive; it can be implemented effectively with GPUs in parallel since each component can be computed independently. \n\n**3. Regarding theoretical justifications**\n\n**Response:** Regarding the theoretical justifications of dynamic damping factors and message weights, these are outside the scope of this piece of research work. Because the theoretical justifications for using damping factors to improve the performance of BP is a challenging open question (cf [c]), investigating the theoretical justifications of dynamic damping factors and message weights could be even harder. \n\n> [c] Cohen, Liel, Rotem Galiki, and Roie Zivan. \"Governing convergence of Max-sum on DCOPs through damping and splitting.\" Artificial Intelligence 279 (2020): 103212.\n\nWe want to highlight that the goal of this work is to propose a deep learning framework to configure and generalize traditional COP solvers such as BP to reduce the human labor to finetune such solvers’ hyperparameters. Intuitively, our model benefits from inferring the optimal damping factors and neighbor weights for each iteration automatically by seamlessly integrating BP and DNNs, and thus, we have more fine-grained control for composing new messages to trigger effective exploration. We have conducted extensive experiments to evaluate the effectiveness of our model.\n\n**4. 
Regarding the error bound of smoothed cost**\n\n**Response:** We thank the reviewer for the suggestion of analyzing the error bound. We have added a theorem to analyze the error bound induced by smoothed cost (Theorem 1) in Sect. 3.2.\n\n**5. Regarding the missing related work**\n\n**Response:** We thank the reviewer for suggesting the 2 related papers. For the traditional BP-based optimization solvers for COPs, DBP-SCFG [c] is currently the strongest baseline, so we only compare with it. \n\nOn the other hand, TRBP [a] and Fractional BP [b] are another class of methods that cope with non-convergence of BP by efficiently solving an LP relaxation of the combinatorial problem, then selecting value assignments based on the solution to the LP. But, they rely on predetermined and static weights, while we automatically learn the best weight through our self-supervised objective, eliminating the need for prior domain knowledge.\n\nWe have discussed and cited these 2 related papers in our revised paper. \n\n**6. Regarding computational overhead of the smoothed cost**\n\n**Response:** For each constraint function, we only need to sum up the combinations of variables in its scope, rather than the combinations of all variables. Besides, by leveraging the powerful vectorized computation offered by modern GPUs, the overhead of computing the smoothed cost can be greatly reduced. Therefore, our DABP can scale up to large instances. Please also refer to our response 5 to Reviewer DKJk.",
" We thank Reviewer DKJk for insightful comments and helpful feedback on our work. We address Reviewer DKJk’s suggestions and respond to specific comments below.\n\n**1. Regarding the memory footprints**\n\n**Response:** We thank the reviewer for the suggestion of including CPU and GPU memory footprints. We have included the GPU memory footprint in Appendix C.6. Besides, DABP has a similar CPU memory footprint of 4.3GB on all test cases. Therefore, we choose to not report the CPU memory footprint separately.\n\n**2. Regarding the memory bound of MBE**\n\n**Response:** We thank the reviewer for the question and for suggesting the GPU implementation of bucket elimination. We set the memory bound based on our computational resources. We have made this point clear in lines 496-497. For GPU implementation, we would like to note that the bottleneck of solution-quality performance in MBE mainly depends on the available memory which directly determines the maximum tables it can process, rather than intractable runtime. Besides, the code provided by the author of [1] is no longer available, which hinders the direct comparison with [1].\n> [1]Bistaffa, Filippo, Nicola Bombieri, and Alessandro Farinelli. An efficient approach for accelerating bucket elimination on GPUs. IEEE Transactions on Cybernetics 47.11 (2016): 3967-3979.\n\n**3. Regarding the caching and memory consumption of Toulbar2**\n\n**Response:** We set HBFS [2] as the tree-search algorithm for Toulbar2, which does not use caching during the solving process. We have also tested its BTD counterpart (i.e., BTD-HBFS) which performs caching during the search, but the performance is worse than HBFS.\n> **[2] D Allouche, S de Givry, G Katsirelos, T Schiex and M Zytnicki. Anytime Hybrid Best-First Search with Tree Decomposition for Weighted CSP. In CP, pages 12-28, 2015.**\n\nThe following table presents the memory usage of Toulbar2 (in MB).\n\n| | Random COPs | WGCPs | SF Nets | SW Nets |\n|-----------|-------------|-------|---------|---------|\n| \\|$X$\\|=60 | 28.69 | 30.33 | 31.02 | 31.28 |\n| \\|$X$\\|=80 | 50.36 | 55.26 | 36.91 | 36.37 |\n| \\|$X$\\|=100 | 55.47 | 56.43 | 51.48 | 46.24 |\n\n**4. Regarding the number of GPUs used in the experiments**\n\n**Response:** In all our experiments, we only use one GPU with 24GB of memory.\n\n**5. Regarding the scalability of DABP**\n\n**Response:** To test the scalability limit of our DABP, we generate random COPs with a graph density of 0.05 and a step size of 50 variables, starting the problems with 150 variables. The results show that our algorithm can scale up to the problems with 300 variables. The following table presents the GPU memory footprint (in GB) for each experiment.\n| \\|$X$\\| | 150 | 200 | 250 | 300 |\n|----------|------|-------|-------|-------|\n| GPU memory footprint | 5.86 | 10.23 | 16.58 | 21.94 |\n",
" We thank Reviewer jt6S for insightful comments and helpful feedback on our work. We address Reviewer jt6S’s suggestions and respond to specific comments below.\n\n**1. Tuning the damped factor of DBP(-SCFG)**\n\n**Response:** We thank the reviewer for the suggestion of tuning the damping factors. In our experiments, we set the damping factor to 0.9 for DBP(-SCFG) according to the recommendation of [1], since we use the similar benchmark problems (e.g., random COPs, scale-free, etc.). We have varied the damping factor from 0.5 to 0.9 with a step size of 0.1, and the results can be found in Appendix C.3 (see Fig. 5-7). It can be observed that our DABP still outperforms DBP and DBP-SCFG, while DBP and DBP-SCFG require significantly longer runtime if we tune the damping factor (see Fig. 8).\n\n> [1] Liel Cohen, Rotem Galiki, and Roie Zivan. Governing convergence of max-sum on DCOPs through damping and splitting. Artificial Intelligence, 279:103212, 2020\n\n\n**2. The significance of DABP**\n\n**Response:** To the best of our knowledge, DBP-SCFG is currently the strongest approximate solver for large-scale COPs. Therefore, improving DBP(-SCFG) by 1.46%-27.5% without incur much additional runtime overhead is a non-trivial advance over SotA. In fact, if we use 5 times of restart, our DABP’s runtime performance is comparable to (or even better than) the one of DBP(-SCFG). Finally, our DABP has a nice anytime property, which can return the best solution found so far within the user-specified runtime budget.\n\n**3. Regarding the model for solving COPs**\n\n**Response:** For each instance to be solved, we always train the model from scratch, i.e., start from an untrained model with initial weights, and continuously improve it on the current instance via online-learning. In other words, there is no pretraining phase and therefore no training example is required. We have made this point clear in row 174-176.\n\n**4. Regarding the reported runtime**\n\n**Response:** The reported runtime for DABP covers all the subprocedures, including learning, inference and BP message-passing.\n\n**5. Regarding the complexity and \"slightly flexible\" DBPs**\n\n**Response:** The parameters of our DABP comprise of $O(T|X|d^2)$ heterogenous neighbor weights and $O(T|X|d)$ heterogenous damping factors. Therefore, to demonstrate the necessity of the parameters, we conduct ablation study by assuming the equal neighbor weights (which is the major source of complexity) and reasoning only about $O(T|X|d)$ heterogenous damping factors, or one average damping factor.\n\nThe results can be found in Appendix C.5. It can be observed that the performance degenerates if we do not reason about neighbor weights, and the gap is widened if we only reason about one average damping factor, which highlights the necessity of our proposed heterogenous damping factors and neighbor weights. ",
" The authors augment the max-sum algorithm with learning in order to solve COPs. The augmentation involves the used GRUs and GATs to produce an alternative, heavily parametric version of max-sum. The cost function can be expressed in terms of one-hot encodings of the solutions for each variable. A softened version of this cost is obtained by using the soft-max of the computed beliefs instead. The parameters of the GRUs and GATs can now be chosen to optimize this softened cost, thus directly using the cost to drive the learning, and not some golden target beliefs. Strengths:\n\n- Technically correct\n- Clear presentation\n- Interesting problem \n- Does not rely on training data with correct solutions; thus potential to solve COPs that aren't solvable by other methods\n\nWeaknesses:\n\n- The competitor DBP is not pushed hard enough, for instance by exploring more damping factors\n- The small advantage obtained wrt DBP doesn't seem to justify all the additional machinery deployed. Also, DBP doesn't require training, can be deployed directly for any new architecture.\n 1. Can you clarify if a single trained DABP is used in _all_ the experiments, or if a different one is used for each model type (or other approach)? Also, how many examples is it trained on? Or is it \"trained\" separately on each new instance to solve? All this wasn't clear to me from reading the paper.\n\n2. The provided time for DABP is the time to run inference only, or does it include the \"learning\" time?\n\n3. Why not test DBP with other damping parameters?\n\n4. Between DBP (essentially one parameter) and DABP (many many parameters), there seems to be a world of \"slightly flexible\" DBPs. Given the slim advantage of DABP over DBP, is DABP really the minimum amount of sophistication necessary to improve on DBP?\n\nMinor comment:\n- nature choice -> natural choice Described above.",
" The paper proposes a new belief propagation algorithm for COPs (constraint optimization problems), called DABP (deep attentive BP). DABP increases the granularity of the damping factors and neighbor weights by allowing them to be dynamic and specific for every variable node. The dynamic damping factors and neighbor weights are automatically inferred for each iteration by DNN, using GRUs (gated recurrent units) and GATs (graph attention network) and a multi-head attention layer. DABP uses novel smooth self-supervised learning loss, therefore avoiding the need for expensive training labels. The experimental evaluation shows that DABP achieves a high convergence rate and performs better than some SOTA baselines. Strengths:\nThe paper combines and refines several existing schemes, while adding novelty items. The damping scheme is refined as much as possible by allowing node specific values for damping factors and neighbor weights. These values are learned for each instance, and the use of GRUs prevents the gradient vanishing issue. A smoothed cost is proposed as a surrogate objective, allowing the learning to happen in the absence of labels. \n\nThe presentation is clear, considering the need for relatively involved notation. The Figure 2 and the Algorithm 1 description are very useful. \n\nWeaknesses:\nThe significance of the proposal depends a lot on the experimental evaluation, which is quite extensive, but could benefit from some more clarifications, see questions. The size of the networks is still small, how far can DABP scale?\n 1) It would be useful to include the memory consumption of the algorithms. In particular, how much memory does DABP use, main memory and GPU memory? Memory availability can impact the performance of some of the algorithms.\n\n- why did you limit MB i-bound to 9? Am I correct to assume that domain size 15, and i-bound 9 reaches the main memory limit? \nJust as a reference, for a more fair comparison, there are MB implementations that exploit GPU, e.g.:\nF. Bistaffa, N. Bombieri and A. Farinelli, \"An Efficient Approach for Accelerating Bucket Elimination on GPUs,\" in IEEE Transactions on Cybernetics, vol. 47, no. 11, pp. 3967-3979\n\n- My understanding is that Toulbar2 is a search based algorithm. Is Toulbar2 using caching during search? Do you know its memory usage?\n\n2) How many GPUs do you use in your setup? Do they have 24GB memory each? \n\n3) The networks are relatively small. How does DABP scale for bigger networks? N/A",
" This paper proposed a deep attentive belief propagation (DABP) model for solving constraint optimization problems (COPs). The proposed DABP generalizes over damping belief propagation by considering dynamic damping factors. In addition, DABP considers different neighbor weights for optimal message composition. Both damping factors and neighbor weights are estimated via neural network units. DABP is trained in a self-supervised manner and the training loss is defined based on a smoothed surrogate of the original objective of COPs. Extensive experiments are performed to demonstrate the effectiveness of the proposal DABP. ## Strengths: \n**Clarity:** The paper is overall well written. The paper is easy to follow. \n\n**Novelty:** A smoothed cost objective is defined based on COPs. It is an interesting idea to leverage the smoothed cost objective for training without requiring annotations. Leveraging neural networks for improving classical well-established algorithms becomes an attractive topic recently. This paper introduces attention mechanism for improving belief propagation on factor graph, which could be of interest to other related researchers. \n\n## Weaknesses: \n**Technical originality:** The technical originality of the paper is marginal. The proposed method is an extension to damping belief propagation. Leveraging attentions to weight messages has been widely explored in different tasks [a].\n> [a] Zhang, Li, et al. \"Dynamic graph message passing networks.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. \n\n**Quality:** The quality of the paper can be improved in two aspects: 1) there are missing related works; 2) the proposed method is overall lack of theoretical justifications. \n\n**Significance:** The proposed DABP outperforms SOTA baselines only marginally while the computing time is significantly longer than SOTA baselines. In Table 1, on small-work networks with |X|=100, DABP achieves cost 25.65 which is slightly better than DBP-SCFG (cost = 26.13). However, DABP takes 7m31s while DBP-SCFG only takes 2m10s. The proposed training objective (Eq. 14) seems to be of high computational complexity. \n\n***I detail my comments on the weaknesses in the Question section below.*** \n ## Theoretical justifications are insufficient\n1. It is not clear how dynamic damping factors improve the inference performance theoretically.\n\n2. Though weighted neural messages under a graph neural network framework have been shown to be effective in improving performance by existing works, the messages in belief propagation algorithm are not weighted for aggregation by theory. There is a lack of justification on weighting messages. Furthermore, it is not clear to me why the message weights should vary from iteration to iteration.\n\n3. Is there any theoretical justification on how the inference performance is affected by message weights? \n\n4. Since the proposed smoothed cost can be considered as a relaxation of the original objective of COPs, it would be good if the authors can provide theoretical justifications on how the inference error is introduced by the relaxation. \n\n## Some related works are missing\nExisting optimization-based algorithms are developed based on belief propagation algorithm. Messages and beliefs are improved from the variational belief propagation point of view, e.g., the tree-reweighted belief propagation [a] and factional belief propagation [b]. 
It would be better if the authors can provide a comparison between the proposed method and related works in revising messages / beliefs. \n\n> [a] Wainwright, Martin J., Tommi S. Jaakkola, and Alan S. Willsky. \"Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching.\" International Workshop on Artificial Intelligence and Statistics. PMLR, 2003.\n\n> [b] Wiegerinck, Wim, and Tom Heskes. \"Fractional belief propagation.\" Advances in Neural Information Processing Systems 15 (2002).\n\n## Computational complexity\nThe training objective (Eq. 14) requires a summation over all $\\tau$ and $f$, which would be of high computational complexity. Can the proposed method scale up to large models?\n\n**Minor question:**\n1. In line 10, 'massage' should be 'message'.\n2. $M_\\theta$ in Eq. 7 is not explained. \n The authors provide discussions on possible limitations of the proposed method.",
" The authors propose a self-supervised DNN Based Belief Propagation solver for Constraint Optimization Problem. \nThey start from the Dumping Belief Propagation and define an approach to have a dynamic Dumping factor and neighbor weights for the computation of the messages. The work is well organized and there are extensive experiments with different benchmarks and models.\nSome other experiments should be performed to have a fair comparison with simple DBP. Rows 40-46 --> Please cite properly\n\nEquations --> The formalism used to describe the message could be misleading. Also to be coherent with the summation I suggest to use:\n$\\mu_{f_m \\rightarrow x_i}^{t-1}$ for message sent from factor node $f_m$ to variable node $x_i$. In the same way for the message from the variable node to the factor node.\n\nEquation 3 --> $\\mu_{x_j \\rightarrow f_l}^{t-1}$ is not introduced\n\nEquation 4 --> what is the meaning of the dependence $\\mu_{l \\rightarrow i}^{t-1}(\\tau_i)$ by $\\tau_i$? It is not described.\n\nUsually in the damping BP the weights of the old estimate and of the new estimate sum to 1. \n\nIn Eq.6 authors introduce a scaling factor that increases the contribution of the new estimates on the basis of the number of the neighbor nodes. This means that if there are a lot of neighbors the new estimate has higher contribution. The authors should describe better the rationale behind this choice.\n\nRow 217 --> Do the authors mean in term of time? The given motivation is related to the time to reach a solution.\n\nTable 1 - WGCP --> The test on this benchmark requires more discussion. The Toulbar2 outperforms other methods dramatically for |X|=60 and MBE require 0s... please check the results on this benchmark and discuss a little bit more on it.\n\nDBP presents little bit worse results (except for WGCP). Do the authors have used 0.9 as Dumping Factor but, since it is necessary to fine-tune this parameter (as they in stated in section 3), I'd like to see a test with different dumping factor because the selected dumping factor could be not a good choice. The same for DBP-SCFG.\n\nFig. 5 I suggest to increase the bars' width. Authors described the main limitation, that the extension to other scenarios is not straightforward. \nThe extension will be study in the future works because deserve more investigations."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"Ttm6q2-1-ZM",
"h8hq0MRGFwV",
"EXWHe1EzI-",
"iZeJF64KA9C",
"051ThR0SfJU",
"2I5yS8kyQUN",
"bKU3YrpIsCM",
"hEtc543WXkg",
"nJJ4X0k-_Zi",
"nips_2022_SZDqCOv6vTB",
"nips_2022_SZDqCOv6vTB",
"nips_2022_SZDqCOv6vTB",
"nips_2022_SZDqCOv6vTB"
] |
nips_2022_FgDzS8_Fz7c | Category-Level 6D Object Pose Estimation in the Wild: A Semi-Supervised Learning Approach and A New Dataset | 6D object pose estimation is one of the fundamental problems in computer vision and robotics research. While a lot of recent efforts have been made on generalizing pose estimation to novel object instances within the same category, namely category-level 6D pose estimation, it is still restricted in constrained environments given the limited number of annotated data. In this paper, we collect Wild6D, a new unlabeled RGBD object video dataset with diverse instances and backgrounds. We utilize this data to generalize category-level 6D object pose estimation in the wild with semi-supervised learning. We propose a new model, called Rendering for Pose estimation network (RePoNet), that is jointly trained using the free ground-truths with the synthetic data, and a silhouette matching objective function on the real-world data. Without using any 3D annotations on real data, our method outperforms state-of-the-art methods on the previous dataset and our Wild6D test set (with manual annotations for evaluation) by a large margin. Project page with Wild6D data: \url{https://oasisyang.github.io/semi-pose/}. | Accept | This paper received 4 reviews with the following scores: SR - BR - WA - A. The reviewers acknowledged the importance of the addressed problem, the dataset contribution, the clear presentation, and a meaningful approach with solid empirical performance. Main disagreements were around comparisons with existing methods (some published at CVPR'22), and the fairness of training setups (supervised-only vs. augmentation with real data).
The AC confirms that per NeurIPS policies the lack of comparisons with CVPR'22 publications indeed cannot be a basis for rejection.
However, looking at the additional results in Table 5 (including the CVPR 2022 paper) it looks like methods have actually been compared on both datasets using the various training regimes. Given that and that the remaining concerns of the reviewers were largely addressed in the rebuttal, both the AC and the SAC recommend acceptance. | train | [
"bj01odG7H6l",
"W_xxLCPORkZ",
"ITprRje415Y",
"K1fsGZZzOee",
"AzESPHo3zM",
"BsEGND-BSwK",
"eTcf7uY8MV7",
"fap1Js30L8l",
"w3aDelysmNm",
"Z4YMP-01Rvj",
"P-ACXnBn5HT"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for responding to and addressing my questions and comments.",
" Dear ACs and Reviewers, \n\nThank you so much again for the detailed feedback. We have reached half way of the author-reviewer discussion period. However, there are no responses yet to our replies.\n\nPlease do not hesitate to let us know if there are any further information and clarification we can provide. We hope to deliver all the information in time before the deadline.\n\nThank you!\nPaper3142 Authors",
" We thank the reviewer for the comments. We address each concern in the following. We have also highlighted the new contents in our revised paper with the color red. \n\n---\n\n**Q1**: Although the proposed dataset has lots of instances, it is limited to 5 categories. I presume this is to keep it consistent with the NOCS synthetic dataset. However, this limitation should be mentioned, in my view. \n\n**A1**: Yes, we agree with the reviewer that the categories are selected to be aligned with NOCS for better comparisons. **We have added a statement for this in the limitation section of the revised paper.** \n\n---\n\n\n**Q2**: why was PSPNet chosen? What is the rationale for combining RGB and geometric features in that way?\n\n**A2**: PSPNet is one of the most popular frameworks to obtain dense representation from RGB images. It is also applied in previous literature including Shape-Prior and CASS for the same purpose. We apply the same backbone for better comparisons. For combining the RGB feature and geometric feature, since both features are going to be projected to the 3D space for the following process, instead of regular convolutions, GCN provides an ideal manner to propagate the information efficiently among unorder 3D points and perform information fusion from two inputs. \n\n---\n\n\n**Q3**: The description of the Shape Network is missing details. It's not clear how the categorical prior is implemented. For instance, does it take a random vector as input?\n\n**A3**: The categorical prior input for the Shape Network is a predefined category-level mesh, which has 1024 vertices. The Shape Network will output the deformation vector for each vertex of the mesh conditioned on the image input. We have added more details of Shape Network architecture **in the revised supplementary material**. \n\n---\n\n**Q4**: The results from Mask R-CNN can often be noisy, especially at the boundaries. It's not clear how this influences the result. It would have been nice to discuss this.\n\n**A4**: While the Mask R-CNN results are not always perfect, given the object-centric Wild6D video dataset, the task becomes relatively simple and we can obtain high-quality segmentation in most cases. We have actually provided a discussion in our original supplementary material on Mask R-CNN and the segmentation result examples are shown in Figure 1 in our original supplementary material. We could further improve the segmentation results by using temporal consistency and tracking, but we find it unnecessary since the current masks are already satisfactory enough for providing training signals. \n\n---\n\n**Q5**: How does Mask R-CNN quality affect the performance of the differentiable renderer.\n\n**A5**: Please refer to the **Q4** and **A4**.\n\n---\n\n**Q6**: Could more categories be trained using the proposed approach\n\n**A6**: We believe our framework can be generalized to more categories than Wild6D, and we plan to extend the Wild6D with more categories and conduct more experiments on it.",
" We thank the reviewer for the comments. We address each concern in the following. We have also highlighted the new contents in our revised paper with the color red. \n\n---\n\n**Q1**: References are not always precise, as lines 207-209 [46] do not even mention the NOCS. \n\n**A1**: The NOCS map is widely utilized in category-level 6D pose estimation. While the method in [45] is for instance-level pose estimation, it has mentioned the NOCS paper multiple times such as “Similar to [46], we let the network predict a normalized representation of MXYZ.” in their paper. Thus, NOCS has indeed been mentioned and there are commons between the representations in [45] and NOCS. We believe it is correct to state both NOCS and the representations in [46] can reflect the geometric shape of the objects in line 207. \n\nTo avoid confusion, we **change the sentence in 207 in our revised pdf** to “The NOCS map also explicitly reflects the geometric shape information of objects; similar properties are also shown with representations in [46].” \n\n---\n\n**Q2**: No details on the accuracy of the deformation network and the loss used are provided, nor qualitative results.\n\n**A2**: We have provided the reconstruction loss in **Line 262-263 for the deformation network (Equation (7))**. The deformation network will also receive gradients back-propagated from other losses applied in the end, including reconstruction loss and silhouette matching loss. We have also reported the performance of object shape reconstruction on NOCS Real275 in our original supplementary material pdf. We kindly request the reviewer to look into Table 2 in our supplementary pdf. \n**We have also added additional reconstruction visualization in the revised supplementary materials.** Please look into Fig 3 in the supplementary material pdf.\n\n---\n\n**Q3**: The whole architecture, collecting several modules and networks, lacks substantial motivation. Indeed, the two losses, the one for fully supervised training and the one for semi-supervised training, differ only in the data they collect. The balance parameters are the same, too… \n\n**A3**: We argue the simplicity of not over-tuning the hyperparameters on losses should be an advantage instead of a disadvantage. We use the same hyperparameter/weight for all losses, and still, show substantial improvement with our method. This actually shows the robustness of our approach instead of a lack of motivation. We could always tune the hyperparameters to obtain even better results, but our motivation is clear: We want to establish a simple and reproducible framework for generalizing 6D object pose in the wild. Paired with a new proposed dataset, we hope to allow more researchers to follow this study.\n\n---\n\n**Q4**: The choice of training with the fully annotated synthetic images seems to be necessary for dealing with the prior shape. The motivation that synthetic data are at no cost is not convincing.\n\n**A4**: The cost of labeling synthetic data is far less than real data. It is common sense that synthetic labels are free in other 6D pose estimation papers, as well as general computer vision papers. The reason is that given the 3D models and data in the simulator, we can render the images and depth from different angles as many as we want automatically by writing a script. As the script is the one to generate the poses and cameras for rendering, thus this process comes with the pose and camera labels for free. 
Writing a script and computation for rendering are considered “free” in the context of comparing to manual labeling.\n\n---\n\n**Q5**: Recent works use only real data, following a different semi-supervised pattern to minimize the amount of training data. For example, \"GPV-Pose: Category-level Object Pose Estimation via Geometry-guided Point-wise Voting… FS-Net [6] also uses only 'real data' for training, despite Table 5 treating it as fully supervised.\n\n**A5**: As mentioned above, it is generally considered that real data annotations are the cost and synthetic labels are free. Thus we took FS-Net as fully supervised. **We have added a new note stating it only used real data for training in our revision**. For GPV-Pose, we would also like to remind the reviewer that the paper is just published in CVPR’22, after the NeurIPS submission deadline, which is considered a concurrent work according to the NeurIPS policy. Nevertheless, **we have included their results in our revision as well**. We would like to emphasize again that our method did not use any real data annotations, and it is expected that methods using real data annotations have more advantages. \n\n---\n",
" **Q6**:On the other hand, recent works are relying only on synthetic datasets such as \"CPPF: Towards Robust Category-Level 9D Pose Estimation in the Wild\" (CVPR2022), \"Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation\" (AAAI2022), \"Category level object pose estimation via neural analysis-by-synthesis\" (ECCV2020), which the authors do not consider.\n\n**A6**: We have included **all the results in the table below and our revision**. Our approach achieves much better performance compared to the mentioned papers. Note the ECCV 2020 paper did not report numbers but curves, thus we can only obtain the rough numbers from measuring the curves in the table below. But we are confident the results are far worse than our approach. We do not include the ECCV 2020 results in our revision given it is only roughly measured. \n\n| Method | IOU\\@0.5 | 5cm, 5 degree| 10cm, 5 degree |\n|----------|--------------|----------|-------------|\n| ECCV 2020| -- | 10.0 | 13.5 | 17.5\n| AAAI 2022 | 73.0 |19.6 | | 54.5 | \n| AAAI 2022 w/ ICP | 72.7 | 33.4 | 62.9 |\n| CVPR 2022 | 26.4 | 16.9 | 44.9 |\n| Ours | **76.0** | **33.9** | **63.0**|\n\n---\n\n**Q7**: Please explain the implicit functions $\\Phi_{nocs}$, and what is used to train the MLP.\n\n**A7**: The details for $\\Phi_{nocs}$ are explained in Line 197-201 before equation (1). The inputs include: (i) The concatenation of the RGBD feature $f_{rgbd}^i$ and the global shape prior feature (the max pooling values of $f_{cate}$ across all points), and (ii) the 3D position of each point of input RGBD image. The output is the NOCS coordinate corresponding to the input point. The MLP is trained end-to-end via back-propagation from the losses applied in the end, including disentangled 6D pose loss, silhouette matching loss, and NOCS regression loss (when ground truth is available).\n\n---\n\n**Q8**:Please explain $\\Phi_{deform}$, how the accuracy of $ \\Phi_{deform}$ is evaluated, and the loss used. \n\n**A8**: The details for $\\Phi_{nocs}$ are explained in Line 226-232 before equation (2). The inputs include: (i) The concatenation of the global RGBD feature (the max pooling values of $f_{rgbd}$ across all points) and the shape prior feature $f_{cate}^i$, and (ii) the 3D vertex position of each point on the shape prior. The output is the deformation applied on each vertex. The network $\\Phi_{deform}$ is trained with the reconstruction loss (Equation (7)) and the silhouette matching loss. \n\nWe have reported the 3D shape reconstruction performance on NOCS Real275 in our original supplementary material pdf. We kindly request the reviewer to look into Table 2 in our supplementary pdf.\n\n---\n\n**Q9**: Even if no qualitative results on the shape are provided, line 320 reads ‘our approach can perform shape reconstruction. How relevant is the shape to generate the object bounding box? \n\n**A9**: We have reported the performance of object shape reconstruction on NOCS Real275 in our original supplementary material pdf Table 2. The shape reconstruction will provide the scale of the 3D bounding box which can be used as the object scale during inference.\n\n---\n\n**Q10**: Why it is not used in inference (lines 224-225). If the Shape network is not used in inference, is the mask required? How is it used? Please provide the parameters of the whole network and possibly runtime.\n\n**A10**: The shape network is used, but not for rotation and translation estimation. 
**We have modified the sentence for avoiding confusion in our revised pdf**. The shape reconstruction is only used to compute the scale of the object, while for the 6D pose during inference, i.e. rotation and translation, we solve the Umeyama algorithm based on the estimated NOCS map and the depth map. The mask is required to segment the object during inference and we follow [3,39,46] to obtain masks via the off-shelf instance segmentation model. The total number of parameters is 22,635,313, the total size is 82.14MB and the runtime is around 10-15 FPS when testing on a single GTX 3090 GPU.\n\n---\n\n**Q11**: Please report correctly the comparisons with the semi-supervised methods using only real data, such as FS-Net.\n\n**A11**: Please refer to the **Q5** and **A5**.",
" We thank the reviewer for the comments. We address each concern in the following. We have also highlighted the new contents in our revised paper with the color red. \n\n---\n\n**Q1**: The rather accurate IMU could have been used to track the phone's movement\n\n**A1**: In order to capture more accurate depth maps with higher resolution, we record all the videos via the front camera. At that point, the API provided by ARKit doesn’t support the camera pose estimation of the front camera. Thus we were not able to track the phone's movement. \n\n---\n\n**Q2**: What is the estimated accuracy of the annotated poses, based on the errors introduced in labeling and tracking?\n\n**A2**: To perform annotations, we track the poses within a very small interval, i.e. 50 frames. We ask the mechanical turkers to check the tracking results, correct the wrong ones, and ensure most annotations are correct. We are not able to provide an estimation but we are confident about the quality of our annotations for test data.\n\n---\n\n**Q3**: Is the instance segmentation network pre-trained (and if yes, how / on which data), or trained along with the rest of the network in an end-to-end manner? For the latter case, I'd expect it to be rather unstable at the beginning of the training.\n\n**A3**: We use the off-shelf instance segmentation model pretrained on the COCO dataset. Since the objects are relatively centered in our dataset, most segmentation results are accurate and we show some examples in our supplementary material pdf (Figure 1). \n\n---\n",
" We thank the reviewer for the comments. We address each concern in the following. We have also highlighted the new contents in our revised paper with the color red. \n\n---\n\n**Q1:** The value of IOU-0.5 of CPS++ reported in Table 5 is way much lower than the value reported in the paper.\n\n**A1:** We report the results from Table5 (Top part) of the CPS++ paper, which is without using ICP for post-processing. This setting is consistent with our approach where we did not apply ICP post-processing for fair comparisons. We assume the reviewer is referring to the CPS++ results with ICP, which is **still worse than our approach** as shown below. We have also added this line of results in our paper for completeness. \n\n| Method | IOU\\@0.5 | 5cm, 5 degree| 10cm, 5 degree |\n|----------|--------------|----------|-------------|\n| CPS++ w ICP | 72.8 | 25.2 | <58.6 |\n| Ours | **76.0** | **33.9** | **63.0**|\n\n---\n\n\n**Q2**: While many SOTA unsupervised algorithms, for example, UDA-COPE [1], behave better than RePoNet on the REAL275 dataset without any real annotations. It is necessary to compare more relevant SOTA methods for a fair comparison.\n\n**A2**: We would like to kindly remind the reviewer that UDA-COPE is just published in CVPR2022 (code not available), after the NeruIPS deadline. It should be considered “concurrent to NeurIPS submissions” according to the NeurIPS policy. Our focus in this paper is on in-the-wild pose estimation. Although we perform ablation on REAL275, our goal and main contributions are both the new Wild6D dataset and the performance there. Finally, our performance is comparable to UDA-COPE on REAL275: For example, on the metric of 5 degree, 2cm, our method achieves 30.7% better than UDA-COPE with 30.4%, on the metric of 5 degree, 5cm, our method achieves 33.9% and UDA-COPE achieves 34.8%. We have added the result of UDA-COPE in our revised paper. \n\n---\n\n\n**Q3**: The author validated all these algorithms on the Wild6D and then emphasized the superiority of RePoNet. The lack of consistency in the training dataset makes this comparison, again, unfair and invalid.\n\n**A3**: CASS, Shape-Prior, DualPoseNet are not trained with unlabeled Wild6D because they are not able to. It is nontrivial to leverage unlabeled data for 6D pose estimation, which is exactly one of the main contributions of our work. We also emphasize that the previous methods utilize annotations of real-world data while ours do not. We argue it is unreasonable to ask for aligning training data settings in this particular case since our goal is to prove our method of being able to use more unlabeled data (Wild6D) is better than previous approaches using the labeled dataset (REAL275). \n\n---\n\n\n**Q4**:No comparison with FS-Net.\n\n**A4**: As mentioned by the reviewer that FS-Net has not released the model, we have tried to train FS-Net with the released code following their instructions, however, it does not work as same as mentioned in their paper. Multiple other users from Github have also reported that the performance cannot be reproduced from the released code: https://github.com/DC1991/FS_Net/issues/14 We kindly ask the reviewer to look into the issues brought up in the Github repo, most of the critical ones have not been addressed by the authors. \n\nWe would also like to mention that the outstanding performance of IOU in FS-Net comes from a better tuned 2D object detector with YOLOv3 for pre-processing before pose estimation, thus the comparisons are also unfair.\n\n---\n",
" The paper proposes a method that utilizing unlabeled data for training to enable the proposed category-level 6Dpose estimation algorithm generalised to new scenarios. A new dataset is collected for implementing the idea. Comparisonal experiments were conducted on the REAL275 dataset and the proposed Wild6D dataset. The proposed algorithm has certain advantages over the listed algorithms. \n Strengths:\n\n1. The new method is able to utlizing the unlabeled real-scene data.\n\n2. The authors give a large RGB-D real dataset and annotate the testing dataset, which may be useful for certain tasks.\n\n3. The authors promise to make the code publicly available\n\nWeaknesses and Questions:\n\n1. The value of IOU-0.5 of CPS++ reported in Table 5 is way much lower than the value reported in the paper [1], please explain this inconsistency.\n\n2. The author simply compared the performance of RePoNet with CPS++ when presenting the effectiveness of RePoNet on processing unlabeled dataset. While many SOTA unsupervised algorithms, for example, UDA-COPE [1], behaves better than RePoNet on the REAL275 dataset without any real annotations. It is necessary to compare more relevant SOTA methods for a fair comparison.\n\n3. In Table 6, the CASS, Shape-Prior, DualPoseNet are all trained on the CAMERA75+REAL275, while RePoNet-syn and RePoNet-semi are trained either on CAMERA75 or CAMERA75+Wild6D. The author validated all these algorithms on the Wild6D and then emphasize the superiority of RePoNet. The lack of consistency on training dataset makes this comparison, again, unfair and invalid.\n\n4. For line 327, the authors stated that 'FS-Net does not release the model, we cannot experiment on it.' I am not fully convinced by the explanation. Although the FS-Net doesn't release the pre-trained model, the code is available on GitHub (https://github.com/DC1991/FS_Net). Given the prominent performance of FS-Net on IOU-0.5, it is worth to try, which I believe it is also possible, to experiment with FS-Net on Wild6D dataset based on the published code. \n\n[1] 'UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation', CVPR2022 please check the strengths and weaknesses. I didn't see any negative societal impact bring by this paper.",
" This dataset-and-method paper proposed a new unlabeled large-scale dataset for category 6D pose estimation in the wild, along with a new network design that can utilize that dataset in a weakly supervised manner. Most notably, the weakly supervision branch is on top of the desired shape and pose branches, thus allowing training both even without or with only a sporadic strong supervision signal. The proposed method is evaluated on existing datasets and existing methods are evaluated on the new dataset, showing that the proposed method exceeds state of the art. + The paper is well written and good to follow. All steps are well motivated and described, the network can likely be reproduced based on the detailed description and well-done figures.\n\n+ The dataset acquisition is well thought off. Using \"Turkers\" to take videos with their own phones - compared to scientists - is very likely to have a reduced bias regarding scene selection, acquisiton etc.\n\n+ The dataset is a significant improvement over prior datasets regarding size and variety. However, it is not fully annotated, thus potentially limiting its long-term impact.\n\n+ The semi-supervised loss is well thought off, especially that it is based on the 6DoF pose which is the actual target (compared to a parallel branch / head in the network).\n\n+ The proposed network and training protocol is well thought off. It is based on well established building blocks and combines them in a way that allows weakly supervised training. This allows it to efficiently leverage a large amount of unlabeled or weakly annotated real data.\n\n+ The experiments are convincing and exhaustive. The proposed method is validated against prior art based on established datasets. Ablation studies show the impact of different building blocks. Some prior art methods are evaluated on the proposed dataset, forming a baseline evaluation.\n\n+ The results are very good compared to prior art. (1) Since iPhones were used, presumably with an own app to also capture depth, the rather accurate IMU could have been used to track the phone's movement (compared to the data-dependend models TEASER++/ICP, l.145).\n\n(2) What is the estimated accuracy of the annotated poses, based on the errors introduced in labelling and tracking?\n\n(3) Is the instance segmentation network pre-trained (and if yes, how / on which data), or trained along with the rest of the network in an end-to-end manner? For the latter case, I'd expect it to be rather unstable in the beginning of the training. Yes, limitations and impact were appropriately addressed.",
" The paper proposes a new dataset for 6D pose estimation and a method named RePoNet, for estimating 6D object pose and size. The method falls in the category-level approach that predicts an object pose only based on its category. \nIn particular, the presented method is defined by two main branches a Pose Network and a Shape Network. The Pose Network uses the NOCS map as an intermediate representation to get the object pose, and the Shape Network uses the specific category 3D shape prior to leveraging the intra-class variation with the object mask.\n\nThe new dataset, named Wild6D, collects 5166 videos for five object categories under multiple views. \nThe method utilises the NOCS synthetic dataset, with all 6Dpose annotations, and the real dataset together with the proposed Wild6D to obtain masks and corresponding depth maps for training. The authors perform a detailed ablation analysis of the method on the datasets used. Results show that the proposed method outperforms the considered competing methods. Comparisons with state-of-the-art follow the metrics of [44].\n The paper poses the relevant question of mitigating the annotation effort in category-level 6D pose estimation. The authors propose to exploit a common strategy in many fields, based on training on synthetic images and resolving the domain gap with real images.\nThe authors refer to the NOCS dataset [44] (Camera25, with 300K synthetic images, and Real275, with 8K real images). At the same time, the authors' contribution is to use only synthetic images from CAMERA25, with full annotations including CAD models, segmentation and 6D pose parameters, and taking from Real275 only the RGBD information (depth map and masks). Due to the domain gap between synthetic and real images, the authors claim that the method is semi-supervised.\n\nThe authors also contribute with a new annotated dataset, Wild6D, built from videos recorded with iPhone and a depth sensor. The Wild6D is annotated with 6D object poses every 50 frames, and the intermediate ones follow a tracking strategy.\nThe strength of the paper is in the competitive results on the 6D pose obtained by the semi-supervised method. \n\nMajor weaknesses of the paper are:\n1. presentation.\n2. The motivation of using only the synthetic parts with all the annotations.\n3. A lack of comparison with most recent works and a discussion on them.\n\nThe description of the architecture is quite hard to follow as there are a huge amount of component networks, introduced via references, not always precise, as in lines 207-209 ([43] does not even mention the NOCS, and it refers to a dense correspondence map, see also \"Cdpn: Coordinates-based disentangled pose network for real-time RGB-based 6-dof object pose estimation\").\n\nThe main difficulty in category-level 6D pose estimation is the intra-class object variation. As in previous work, the paper uses a 'shape prior' to face the problem. Adapting the shape-prior to the current object refers to a deformation network (supposedly as in [37]). No details on the accuracy of the deformation network and the loss used are provided, nor qualitative results.\n\nThe whole architecture, collecting several modules and networks, lacks substantial motivation.\nIndeed, the two losses, the one for fully supervised training and the one for semi-supervised training, differ only in the data they collect. 
The balance parameters are the same, too, implying that there is no specificity in the semi-supervised learning for pose-sensitive features and geometric relationships.\n\nThe choice of training with the fully annotated synthetic images seems to be necessary for dealing with the prior shape. The motivation that synthetic data are at no cost is not convincing.\n\nRecent works use only real data, following a different semi-supervised pattern to minimise the amount of training data. \nFor example, \"GPV-Pose: Category-level Object Pose Estimation via Geometry-guided Point-wise Voting \"(CVPR2022) uses just real data for training and uses for evaluation not only CAMERA25 and REAL275 but also LineMod (not considered by the paper). FS-Net [6] also uses only 'real data' for training, despite Table 5 treating it as fully supervised. \n\nOn the other hand, there are recent works relying only on synthetic datasets such as \"CPPF: Towards Robust Category-Level 9D Pose Estimation in the Wild\" (CVPR2022), \"Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation\" (AAAI2022), \"Category level object pose estimation via neural analysis-by-synthesis\" (ECCV202), which the authors do not consider.\n Please explain the implicit functions \\Psi_{nocs}, and what is used to train the MLP. \nPlease explain \\Psi_{deform}, how the accuracy of \\Psi_{deform} is evaluated and the loss used. \n\nEven if no qualitative results on the shape are provided, line 320 reads ‘our approach can perform shape reconstruction’. \nHow relevant is the shape to generate the object bounding box? Why it is not used in inference (lines 224-225).\n\nIf the shape network is not used in inference, is the mask required? How is it used?\nPlease provide the parameters of the whole network and possibly runtime.\n\nPlease report correctly the comparisons with the semi-supervised methods using only real data, such as FS-Net.\n Limitations are discussed, in particular, the difficulty to generalize. ",
" This paper presents a dataset and semi-supervised method for 6DoF object pose estimation with the aim of generalizing better. The first contribution is a dataset called Wild6D which is a dataset with 5166 RGBD videos of 1722 object instances from 5 categories. The second contribution is a neural network RePoNet that is trained to estimate 6DoF pose by training on both synthetic data (with full supervision) and real data (partial supervision). Experimental results indicate that the proposed method outperforms other methods on the 6DoF pose estimation task. Overall, this paper has a nice contribution and I would recommend acceptance. In the following, I will list strengths, weaknesses, and other questions.\n\n## Strengths\n\n- The paper addresses an important problem of great practical importance. Generalizing 6DoF pose estimation is one of the grand challenges in 3D vision.\n- The dataset contribution is important and can help make rapid progress in the community.\n- The results are better than other state-of-the-art approaches.\n- The idea of using full supervision from synthetic data and partial supervision from real data is obvious, but surprisingly no other paper has tried it as far as I am aware.\n\n## Weaknesses\n\n- Although the proposed dataset has lots of instances, it is limited to 5 categories. I presume this is to keep it consistent with the NOCS synthetic dataset. However, this limitation should be mentioned, in my view.\n- While the architecture proposed in Figure 3 makes sense, there is not much justification for *why* this particular architecture was chosen over alternatives. For instance, why PSPNet was chosen? What is the rationale for combining RGB and geometric features in that way?\n- The description of the Shape Network is missing details. It's not clear how the categorical prior is implemented. For instance, does it take a random vector as input?\n- The results from Mask R-CNN can often be noisy, especially at the boundaries. It's not clear how this influences the result. It would have been nice to have a discussion on this.\n\n It would be nice to see a discussion of the following points (in addition to the above limitations) in the rebuttal:\n\n- How does Mask R-CNN quality affect the performance of the differentiable renderer?\n- Could more categories be trained using the proposed approach? The paper includes a short limitations section. It would have been nice to include a discussion of societal impacts -- this is missing currently."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
5
] | [
"ITprRje415Y",
"nips_2022_FgDzS8_Fz7c",
"P-ACXnBn5HT",
"Z4YMP-01Rvj",
"Z4YMP-01Rvj",
"w3aDelysmNm",
"fap1Js30L8l",
"nips_2022_FgDzS8_Fz7c",
"nips_2022_FgDzS8_Fz7c",
"nips_2022_FgDzS8_Fz7c",
"nips_2022_FgDzS8_Fz7c"
] |
nips_2022_IfFZr1gl0b | Uni-Mol: A Universal 3D Molecular Representation Learning Framework | Molecular representation learning (MRL) has gained tremendous attention due to its critical role in learning from limited supervised data for applications like drug design. In most MRL methods, molecules are treated as 1D sequential tokens or 2D topology graphs, limiting their ability to incorporate 3D information for downstream tasks and, in particular, making it almost impossible for 3D geometry prediction or generation. Herein, we propose Uni-Mol, a universal MRL framework that significantly enlarges the representation ability and application scope of MRL schemes. Uni-Mol is composed of two models with the same SE(3)-equivariant transformer architecture: a molecular pretraining model trained by 209M molecular conformations; a pocket pretraining model trained by 3M candidate protein pocket data. The two models are used independently for separate tasks, and are combined when used in protein-ligand binding tasks. By properly incorporating 3D information, Uni-Mol outperforms SOTA in 14/15 molecular property prediction tasks. Moreover, Uni-Mol achieves superior performance in 3D spatial tasks, including protein-ligand binding pose prediction, molecular conformation generation, etc. Finally, we show that Uni-Mol can be successfully applied to the tasks with few-shot data like pocket druggability prediction. | Reject | This paper proposes a new framework for molecular representation learning (MRL) using both 2D and 3D molecular data. This framework is general and applied to various problems (e.g., protein-ligand binding pose prediction and molecular conformation prediction). I believe this paper is potentially quite impactful and able to reshape how research is conducted for MRL research.
However, the contribution of this paper is unclear in its current form.
- The proposed methodology is not very novel and uses a combination of existing methods.
- While the main contribution of this paper is to propose a new framework for MRL, the experiments focus on evaluating a single algorithm (i.e., SE(3)-equivariant model + 3 self-supervised tasks) compared to existing algorithms under different frameworks. In other words, the experiments do not deliver new information since (1) existing works demonstrated how combining 2D & 3D data improves downstream task performance and (2) pretraining is useful for the considered downstream tasks.
Overall, I recommend rejection for this paper. However, I believe this paper can be a very strong submission for the next conference if the authors clearly demonstrate their contribution. For example, I think the proposed idea would be pleasantly presented as an important "benchmark" paper, rather than a framework with superior performance. | train | [
"s4ZAaJZY5gy",
"6PV7yHcXFPp",
"pGMAjn_byTL",
"vr-lLKZiOhH",
"-jeGmGfwQQM",
"Zhd-TZoWHQ0",
"ev4A1EXxID",
"s9TqG3LkgIwy",
"wZ0UXiJP5P",
"J4YP9qvto9p",
"VlIgJJsDQp6",
"7wQjSwTgTau",
"PnpE-7HLJh_",
"Uv-HpgJzDqK",
"Y2Ghep82GlE",
"jZWLM_S3Kq1",
"3djj7SBxF8H",
"BJHsVZ0g2ot",
"pD2kvQRCZ8O"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers, \n\nThank you so much for the comprehensive and insightful reviews. Based on the review comments, we made a significant effort to finish additional 5 ablation studies from scratch in the tight discussion period. And we have updated the Appendix with these new results, and more discussions according to review comments. We would very much appreciate it if you would consider raising your score in light of our response.\n\nWe thank \"Reviewer hrSe\" so much for your support and the quick response. \n\nAnd we also thank \"Reviewer Ue6N\" so much for the further comments, hope our new response to them and the revision in the Appendix can address your concerns. \n\nWe hope we can get the responses from \"Reviewer byGu\" and \"Reviewer Ms4D\" soon, as the \"Author- Reviewer Discussion\" will end in several hours. Your further comments and suggestions will be very helpful for us!\n\nThanks,\n\nAuthors",
" We thank all reviewers for your comprehensive and insightful reviews. Based on previous comments, we provide a revision of the paper, including more details on downstream tasks, more discussions with related works, a visualization of the attention map, and 5 ablation studies.\n\nDue to the paper length limit, we put these contents in the appendix temporarily. We will move them to the main body for the camera-ready version when an additional page will be allowed.\n\nThanks,\n\nAuthors",
" It seems some reviewers misunderstood the Uni-Mol framework and its \"components\". So we make a clarification here.\n\nIn Uni-Mol's framework, the whole backbone model is a standalone \"component\". Besides the backbone model, as shown in Figure 1 in the paper, the remaining components are two pertaining datasets, the tasks for self-supervised learning, the downstream tasks, and the methods to use the pretraining models to finetune them, especially for 3D downstream tasks. Therefore, we don't treat the detailed layers/components inside the backbone model as the components of Uni-Mol. Moreover, as the objective of Uni-Mol is not to develop (or find) a backbone model, we don't think it is necessary for us to enumerate the possible combination of backbone models.\n\nWe want to highlight again, that our contribution is \"to build up a new framework for tasks in drug design with a focus on organic molecule drugs, especially in the 3D tasks, which cannot be covered by previous frameworks.\"\n\nWe are not to develop a pretrained backbone model, nor a framework of the combination of existing works, but a new framework that can solve the 3D tasks which cannot be covered by previous frameworks.",
" We want to highlight again, that our contribution is \"to build up a new framework for tasks in drug design with a focus on organic molecule drugs, especially in the 3D tasks, which cannot be covered by previous frameworks.\" \n\nWe are not to develop a pretrained backbone model, nor a framework of the combination of existing works, but a new framework that can solve the 3D tasks which cannot be covered by previous frameworks.",
" We thank the reviewer for the additional comments. Please see our feedbacks as follows.\n- About the paper revision.\n - As we add many discussions according to the reviewer comments, it exceeds the paper length limit. And we are still waiting for the complete ablation studies to be finished. So we didn't update the paper for now. If you need us to update it, we can put them in the Appendix first, and update the supplementary material. (updated: we just update the supplementary material).\n- About ablation study.\n - We mainly focus on the real-world downstream applications in our experiment section and we don't think the ablation study about the pretraining is a necessary part for the completeness of the paper. So we don't agree that \"paper might have been considered as not completed\". However, we do agree that the ablation study is important, as it helps us to better understand the proposed backbone model. We thank the reviewer's comments, which helped us to improve our paper with *5* more ablation studies during the paper review. \n - We have updated the ablation study results on 15/15 tasks in the Appendix.\n- About the highlight of contribution. \n - Thank you for the suggestion, we will summarize our contributions in the introduction. \n - Regard to \"many figures or formulations to illustrate positional encoding, self-attention, etc. \", we actually only have 2 figures in the paper, one is the whole framework, and another one is the design of the backbone model. \n- About the motivation for using Transformer\n - We have added the visualization of attention into supplementary material.\n - One more motivation is that Transformer has a larger receptive field, as the nodes/atoms are fully connected. While in GNN, we usually cut off the edges by locality (distances, bonds). We believe the larger receptive field has more advantages in self-supervised pertaining, as it could learn the long-range interactions from large-scale unlabeled data. For example, in the last row of the attention visualization, there are some columns (21-27) that have slightly large attention weights, while the distances are also large.\n- We don't agree \"if the main contribution is a framework, that means the paper is mainly about engineering. I recognize the paper tries to resolve an interesting problem. However, an engineering or a framework design may be not that significant to machine learning.\". \n - First, our paper is submitted for \"Machine Learning for Sciences (e.g. biology, physics, health sciences, social sciences) https://nips.cc/Conferences/2022/CallForPapers\". Our goal is to use machine learning to improve the applications in drug design, not to improve machine learning itself. \n - second, our framework is not a simple engineering work, nor a combination of existing works. Simply using existing models and data for pre-training cannot solve the 3D tasks in drug design. For example, most existing GNN models cannot meet the requirements of output the 3D positions directly and be used in self-supervised tasks simultaneously. Even though the backbone model in Uni-Mol is simple, and some designs in it are inspired by previous works, it is still a new model designed for the entire Uni-Mol framework. \n- We don't agree \"Since the main contribution is a framework, which should not depend on specific components, it could be better to try different components. For example, replacing Transformer with GNN, to show the generalization ability. 
This would increase the contribution of the proposed framework.\"\n - In Uni-Mol framework, the backbone model needs to be 1) able to be used in self-supervised training; 2) able to encode 3D inputs; 3) able to output 3D positions. To our best knowledge, no previous model meets these requirements. So a simple replacement is not possible.\n - Maybe we have a different understanding of the \"component\" in the framework. In Uni-Mol's framework, the whole backbone model is a standalone \"component\". Besides the backbone model, as shown in Figure 1 in the paper, the remaining components are two pertaining datasets, the tasks for self-supervised learning, the downstream tasks, and the methods to use the pretraining models to finetune them, especially for 3D downstream tasks. Therefore, we don't treat the detailed layers inside the backbone model as the components of Uni-Mol. Moreover, as the objective of Uni-Mol is not to develop (or find) a new backbone model, we don't think it is necessary for us to enumerate the possible combination of backbone models.\n- We don't agree \"in which some core components have been used in existing works.\"\n - In Uni-Mol framework, we create the pertaining dataset by ourselves, design the backbone model and self-supervised task for learning 3D representation and 3D outputs, and design the finetune strategies in the various downstream 3D tasks. To our best knowledge, most of these components are new. ",
" Thank the authors for their rebuttal and their detailed responses to my review!\n\nA few comments:\n\n* Based on the authors' responses regarding the novelty, the main contribution is a *framework*, in which some core components have been used in existing works. First, I would suggest the authors provide a summary of novelty in the paper. Regarding the current paper writing, readers may misunderstand that the paper focuses on component design. The paper uses many figures or formulations to illustrate positional encoding, self-attention, etc. This may mislead readers about the contribution. Second, if the main contribution is a *framework*, that means the paper is mainly about engineering. I recognize the paper tries to resolve an interesting problem. However, an engineering or a *framework* design may be not that significant to machine learning. \n\n* It seems that the authors did not provide a revision of the paper. \n\n* The attention map is not provided. The reviewer understands Transformer has been used in existing works. However, just following them without a clear motivation is not that intereesting. But anyway, this won't be my concern for the paper. \n\n* The reviewer understands the rebuttal time is limited. However, the ablation study should be provided in the initial submission. Otherwise, the paper might have been considered as not completed. \n\n* Since the main contribution is a *framework*, which should not depend on specific components, it could be better to try different components. For example, replacing Transformer with GNN, to show the generalization ability. This would increase the contribution of the proposed *framework*.",
" Thank you, we just check the model in your mentioned paper, and the following are the differenes:\n1. Both Uni-Mol and 3D-Graphormer use the pair-wise Euclidean distance and Gaussian kernel to encode 3D spatial information. However, 3D-Graphormer has an additional node-level centrality encoding, which is the sum of spatial encodings of each node.\n2. 3D-Graphormer doesn't have pair-representation.\n3. Our SE(3) Coordinate Head is different from the \"node-level projection head\" in 3D-Graphormer. The method used in 3D-Graphormer is an attention layer for 3 axes in 3D coordinate.\n4. 3D-Graphormer is not designed for self-supervised pretraining.\n\nWe will also add the above comparison to the paper. And thank you again for supporting our work!",
" Thanks for the authors’ response. I don’t have further questions. Another paper I want to mention is [1], a following-up paper by Graphormer, adapting this method to 3D molecules. \n\nOverall, I think this is a good paper and can contribute to the research area of 3D molecules.\n\n[1] Shi, Yu, et al. \"Benchmarking graphormer on large-scale molecular modeling datasets.\" arXiv preprint arXiv:2203.04810 (2022).",
" **Ablation studies for pretraining and pair distance.** Sorry for missing the ablation studies. We have conducted ablation experiments for pretraining and positional encoding. To demonstrate the effectiveness of pair distance, we replace the original invariant spatial position encoding with a 2D Graphormer-like[6] shortest path positional encoding and a 1D BERT-like[7] relative position encoding on atoms. To demonstrate the effectiveness of pretraining, we train our model from scratch on the downstream dataset. The results are summarized in the following table. \n\nDataset|BBBP(%, ↑)|BACE(%, ↑)|QM7(↓)|QM8(↓)|\n:---:|:--:|:---:|:---:|:--:|\n**Uni-Mol**|**72.9(0.6)** |**85.7(0.2)**|**41.8(0.2)** |**0.0156(0.0001)**|\nw/o pretrain|69.0(0.7) |80.9(5.4)|45.2(0.6)|0.0174(0.0002)|\n2D shortest path encoding (Graphormer like)|71.6(2.1)|85.6(1.1)|60.6(0.2)|0.0164(0.0001)|\n1D BERT-like relative positional encodings on atoms|70.3(1.9)|77.8(3.7)|77.5(2.7)|0.0283(0.0007)|\n\nFrom the above table, it is clear that pretraining and 3D information indeed helps the performance of downstream tasks. Due to the tight timeline, we cannot finish all downstream tasks. We will add the full results to the paper in the next revision.\n\n**Contradictory contents about chemical bonds.** Thank you for pointing this out, we will make it clearer in the next version. They are actually not contradictory contents. First, a chemical bond is composed of two atoms and a bond. And a bond has its type and length. For example, C-H is a single bond between a C atom and an H atom, and its bond length is about 1.09 Å. Therefore, although chemical bonds are not directly saved in the 3D spatial encoding, we can easily infer the chemical bond based on the pair type and pair distance. For example, if the distance between a C atom and an H atom is 1.09 Å, we can easily infer that there is a C-H bond. And when one atom is masked, it is also very easy to infer its atom type based on the pair distance. \n\n**Vocabulary size.** First, the models for molecules and pockets are different; they don't need to share the same vocabulary. And the vocabulary is made based on the atoms' statistical information in the data. In pocket data, there are amino acids, whose atoms are mostly C, N, O, S and H. While in molecule data, the atom types are more diverse, so a larger vocabulary is used. \n\nThank you very much for reading. Please reconsider your rating on this paper if your questions and concerns are addressed.\n\n>Reference:\n\n[1] Dao T, Fu D Y, Ermon S, et al. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness\n\n[2] Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold\n\n[3] Axelrod S, Gomez-Bombarelli R. GEOM, energy-annotated molecular conformations for property prediction and molecular generation\n\n[4] Fpocket: An open source platform for ligand pocket detection\n\n[5] A Geometric Deep Learning Approach to Predict Binding Conformations of Bioactive Molecules\n\n[6] Ying, Chengxuan, et al. \"Do transformers really perform badly for graph representation?.\"\n\n[7] Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding",
" We thank all reviewers for your comprehensive and insightful reviews, and have replied to your comments respectively.\nThere are some comments worrying about the novelty of Uni-Mol. It is sometimes tricky to evaluate the novelty of a work since one can hardly avoid being subjective for such an evaluation -- it's also hard for us to claim it on an absolutely objective matter. Yet we hope to discuss this issue and better explain the novelty of the work here.\n\nIt is important to realize that the objective of Uni-Mol is *not* to develop a pretrained backbone model, but to build up a new *framework* for tasks in drug design with a focus on organic molecule drugs, especially in the *3D tasks*, which cannot be covered by previous frameworks. Specifically, previous frameworks mostly focused on molecular property prediction tasks, and achieved competitive performance. Uni-Mol cannot only outperform them in these prediction tasks, but also extend the application scopes to many 3D tasks and achieved SOTA performance as well. Some of the components in the backbone model are inspired from some previous work, like AlphaFold, but we don't think this should affect the novelty of Uni-Mol in its application domain. Taking BERT as an example, will you think it is *not* novel due to its backbone model (Transformer) not being originally proposed by itself? \n\nWith this consideration in mind, let us try to address concerns like the *simplicity* of the backbone Transformer model. Simplicity doesn't imply the lack of novelty. Finding a simple and effective solution is usually not easy. There are many efforts behind the presented version of the model architecture (We can add a \"Failed Attempts\" section in the Appendix if needed). For example, to encode 3D spatial information, we tried several \"fancy\" methods used in previous works, like SchNet, DIMENet, and GemNet. And we found most of them achieve a similar performance as Gaussian. Following the principle of parsimony, we use the simplest Gaussian. Besides, based on Gaussian, we tried several methods to enhance the distance between different atom types, and found the simple affine transformation worked very well. There are several failed attempts. For example, we tried to explicitly encode chirality into the model, due to the current model cannot distinguish the symmetrical molecules. However, it is only slightly better, so we didn't use it. In the pretraining tasks, we also tried many strategies, for example, contrastive learning over different conformations, learning the mapping between 2D and 3D, etc. And we found the simplest masked atom prediction combined with the 3D position denoising task performs very well. In short, when two methods perform similarly, we always choose the simple one; when a fancy/complicated method is not necessary and doesn't bring much gain, we will not use it.\n\nHopefully, the above can address your concerns. If you still have any questions/concerns about the novelty, we can discuss them in this thread. \n\nThanks,\n\nAuthors",
" Thank you very much for the careful review! We will revise our paper to address your comments. \n\n**Max atoms.** Please note that we use the max atom as 256 because it is enough for the pocket (cover 99.998% pockets ). 256 is not a hard limit. During training, with gradient checkpointing, we can easily extend the atom number to 800+, by the V100 GPU with 32G memory. There are some recent works that can also significantly reduce the memory cost in Transformer, like Flash-Attention[1]. So we believe the max number of atoms will not be a limit. Besides, with an appropriate sampling strategy, even if the number of atoms could be limited in training time, we can use much more atoms at inference time and still achieve good performance. For example, in Alphafold[2], the training only samples 256/384 residues for saving memories and efficiency, but the inference can use thousands of residues. \n\n**For molecular conformation generation task's data.** Thank you for pointing this out, and we will add more details in the paper about this. Yes, we leverage RDKit (ETKGD) for generating inputs in molecular conformation generation tasks. Specifically, in finetuning, we randomly generate 100 conformations and cluster them into 10 conformations, as the model input. A similar pipeline is used in the inference of test data. For most baselines, as they aim to generate conformations from scratch, RDKit-generated conformations are not leveraged. We did not check whether any molecules exist in both pretraining data set and test set of molecular generation. As the same input conformation generation method is used in pretraining and finetuning, and the label of the test set is the accurate conformation generated by semi-empirical density functional theory (DFT)[3], we believe there is no data leakage in the test set. \n\n**Fpocket Score and Druggability Score.** Fpocket tool [4] will output 4 scores, Fpocket score, Druggability score, Total SASA, and Hydrophobicity Score. We call these 4 scores Fpocket scores (an \"s\" here). Specifically, the Fpocket score is a custom score by Fpocket; the druggability score is an empirical score calculated from evolution and homologous information. \n\nPerformance of Fpocket tool on NRDLD dataset.\n\n| |Accuracy|Recall|Precision|F1-score|\n:---:|:--:|:---:|:---:|:--:|\nFpocket score|0.73 |0.83|0.76|0.79|\nDruggability Score|0.78|0.83|0.83|0.83|\n\n**Differential evolution algorithm used in protein-ligand pairs.** We use a differential evolution algorithm inspired by Deepdock[5]. We sample 10 RDKit conformations from the uniform dihedral angle in rotatable bonds, then choose the lowest score function in evolution sampling as the final predicted ligand pose. Moreover, we also tried a faster method, by directly back-propagation from distance-based scoring function to input coordinates. Sorry for missing the details. We will add them in the next version of the paper.\n\n",
" Thank you very much for supporting our work and careful review! We will revise our paper to address your comments. \n\n**Comparison with Graphormer.** Graphormer[1] motivated us to use Transformer, and we also follow its simplicity in designing the Uni-Mol backbone model. However, the positional encoding (shortest path) used in Grahpormer can only handle 2D molecular graphs, not 3D positions. So we added several modifications to make the model have the ability to handle 3D inputs and outputs. Very sorry, we found the citation to Graphormer in the paper was accidentally removed in paper revisions, and we will add it back in the next version. \n\n**Comparison with newly-proposed graph DL-base methods on the protein-ligand binding pose prediction task.** Thank you for the suggestion. Equibind[2] is an excellent work, and we will add a discussion about it in the paper. However, we cannot have an apple-to-apple comparison, due to Equibind being proposed for Blind Docking. While Uni-Mol is currently designed for Targeted Docking, which follows most previous traditional tools in docking[3]. The difference is that Blind Docking uses whole protein for docking, while Target Docking directly uses the pocket. We will extend Uni-Mol to Blind Docking tasks in future work.\n\n**Extend to the whole protein docking.** This is a very good question. We can follow the two-stage solutions in the traditional docking pipeline: first detect a pocket, by tools like Fpocket[4], or by human, then perform the Targeted Docking in the specific pocket. We can also extend the Uni-Mol framework to support Blind Docking like Equibind. To do that, we may need to extend the pocket pretraining to the whole protein pretraining, and use the whole protein in the finetuning as well. The challenge is that the number of total atoms in the whole protein could be very large. So we may have to use $C_\\alpha$ atoms only.\n\nThank you very much for reading. Please reconsider your rating on this paper if your questions and concerns are addressed.\n\n>Reference:\n\n[1] Ying, Chengxuan, et al. \"Do transformers really perform badly for graph representation?.\"\n\n[2] Stärk, Hannes, et al. \"Equibind: Geometric deep learning for drug binding structure prediction.\"\n\n[3] GPU-Accelerated Drug Discovery with Docking on the Summit Supercomputer: Porting, Optimization, and Application to COVID-19 Research\n\n[4] Fpocket: An open source platform for ligand pocket detection",
" **Point Transformer.** Similar to previous atom-level models, the transformer models used in 3D vision are mostly designed for supervised learning, not self-supervised learning, and cannot output 3D positions. Therefore, it is hard to compare them directly. However, we still design an experiment for comparison per your request. Specifically, we replace the spatial encoding method used in Uni-Mol with the one used in Point Transformer. The results are summarized in the following table, and it clearly shows that Uni-Mol is better. \n\nDataset|BBBP(%, ↑)|BACE(%, ↑)|QM7(↓)|QM8(↓)|\n:---:|:--:|:---:|:---:|:--:|\n**Uni-Mol**|**72.9(0.6)** |**85.7(0.2)**|**41.8(0.2)** |**0.0156(0.0001)**|\nPoint Transformer|72.0(0.6)|84.1(1.3)|47.2(0.7)|0.0208(0.0002)|\n\n**Ablation studies.** Sorry for missing the ablation studies and thanks for your suggestion. We have conducted the ablation studies per your request. To demonstrate the effectiveness of invariant spatial positional encoding, we replace it with a 2D Graphormer-like shortest path positional encoding and a 1D BERT-like[3] relative position encoding on atoms. To demonstrate the effectiveness of pretraining, we train our model from scratch on the downstream dataset. And we only reserve the invariant spatial positional encoding and remove the update of pair representation to prove the effectiveness of the pair representation. The results are summarized in the following table.\n\nDataset|BBBP(%, ↑)|BACE(%, ↑)|QM7(↓)|QM8(↓)|\n:---:|:--:|:---:|:---:|:--:|\n**Uni-Mol**|**72.9(0.6)** |**85.7(0.2)**|**41.8(0.2)** |**0.0156(0.0001)**|\nw/o pretrain|69.0(0.7) |80.9(5.4)|45.2(0.6)|0.0174(0.0002)|\nw/o pair repr|71.6(1.3)|85.4(2.7)|45.2(1.0) |0.0158(0.0001)|\n2D shortest path encoding (Graphormer like)|71.6(2.1)|85.6(1.1)|60.6(0.2)|0.0164(0.0001)|\n1D BERT-like relative positional encodings on atoms|70.3(1.9)|77.8(3.7)|77.5(2.7)|0.0283(0.0007)|\n\nFrom the above table, it is clear that pretraining, pair representation, and 3D information indeed help the performance of downstream tasks. Due to the tight timeline, we cannot finish all downstream tasks. We will add the full results to the paper in the next revision.\n\nPlease note we did not conduct the ablation study for SE(3)-equivariance coordinate head. The SE(3)-equivariance coordinate head is introduced for the ability to output coordinates directly, thus broadening the application scopes of our method. It is not introduced to enhance the effectiveness of the model. Therefore, we think it is not necessary to perform ablation experiments on it.\n\n\nThank you very much for reading. Please reconsider your rating on this paper if your questions and concerns are addressed.\n\n>Reference:\n\n[1] Ying, Chengxuan, et al. \"Do transformers really perform badly for graph representation?.\"\n\n[2] Rong Y, Bian Y, Xu T, et al. Self-supervised graph transformer on large-scale molecular data\n\n[3] Devlin J, Chang M W, Lee K, et al. Bert: \nPre-training of deep bidirectional transformers for language understanding",
" Thank you very much for the careful review! We will revise our paper to address your comments. \n\n**Originality/Novelty of Uni-Mol.** Please refer to our response to all reviewers (\"On the novelty of Uni-Mol\"). \n\n**Euclidean distance map.** Also, refer to our response to all reviewers. The objective of Uni-Mol is not to develop a backbone model, nor to propose a better 3D encoding method, but to build up a framework for tasks in drug design with a focus on organic molecule drugs, especially in the 3D tasks. Besides, in our paper (L73), as we said \"we simply use Euclidean distances of all atom pairs\", we did not claim that our advantage/difference is the Euclidean distance map. What we claimed is that Uni-Mol can \"directly take 3D positions as both inputs and outputs\" (L42), not a new 3D encoding. \n\n**Atom-level baseline models.** As aforementioned, our objective is not to develop a pretrained backbone model. Besides, most previous atom-level models are designed for supervised learning, not for self-supervised/pretraining, and most of them cannot output the per-atom 3D positions. Therefore, most of the previous work cannot be directly used as the backbone model in Uni-Mol. For example, the paper you posted, \"Intrinsic-Extrinsic Convolution and Pooling for Learning on 3D Protein Structures\", is a ResNet-based model for classification, which cannot be used in Uni-Mol's downstream tasks which need 3D outputs, like molecular conformation generation and protein-ligand binding pose prediction. We will also add a discussion about this in the paper.\n\n**The pretraining method is very simple.** As mentioned in \"On the novelty of Uni-Mol\", simplicity does not imply the lack of novelty. Finding a simple and effective solution is usually not easy. We tried several pretraining tasks, for example, contrastive learning over different conformations, learning the mapping from 2D<->3D, etc. And we found the simplest masked atom prediction combined with the 3D position denoising task performs very well. \n\n**Discussion with protein representation learning works.** Thank you. We will discuss them in the paper. However, please note that our paper primarily deals with organic molecules, not proteins.\n\n**Motivation to employ Transformer.** Thank you for the suggestion, \"Transformer is the default backbone in representation learning\" indeed is not correct. We will change it to \"Transformer is widely used as a backbone model in representation learning\". And we will add more details about the motivation for using Transformer in the paper. In short, Transformer has shown its power in graph data recently, for example, Graphormer [1] won two champions at KDD CUP 2021 graph level track and NeurIPS 2021 Open Catalyst Challenge. And some previous works also use Transformer in molecular representation learning, like GROVER [2]. \n\n**Self-attention map visualization.** Thank you very much for the suggestion, we would like to add the self-attention map visualization to the paper. \n\n**Changes compared with vanilla Transformer.** Our design principle for the backbone model is as simple as possible, so we don't make many changes. All our modifications are summarised in Sec2.1 of the paper. \n\n",
" Thank you very much for supporting our work and careful review! We will revise our paper to address your comments. \n\n**Originality/Novelty of Uni-Mol.** Please refer to our response to all reviewers (\"On the novelty of Uni-Mol\"). \n\n**Missing pretraining literature.** We will add more literature on pretraining in the next version.\n\n**3D conformation data related.**\n1. The candidate pocket dataset was collected from the PDB database of protein structures surface experimentally resolved. L117~L122 describes how we detect, extract, clean, and enhance candidate pockets. \n2. About the bias in molecular conformation data. \n\n a. The same molecular conformation generation (computation) pipeline is used in both pretraining and finetuning. Therefore, in most downstream tasks, there is no bias between pretraining and downstream tasks. \n\n b. The space of molecular conformation is actually quite large, and in different environments, the stable (low-energy) conformations are also different. Therefore, it is non-trivial to get ground-truth annotations of molecular conformations, in both experiment and computation. Therefore, we use multiple conformations (10 in our current setting), to alleviate the possible bias, in both pretraining and finetuning.\n\n c. To obtain a conformational distribution that fully conforms to the laws of physics, long-term molecular dynamics simulations and conformational optimization at the density functional theory(DFT) level are required. However, it is computationally costly. Therefore, it is almost infeasible to generate large-scale pretraining data by that protocol. And it is also very inefficient to be used in real-world downstream tasks, most of which don't contain conformations and need to generate conformations on-the-fly.\n\n To summarize, due to the large space of molecular conformation, we choose an efficient method to generate multiple conformations for a molecule, to cover the conformation space as much as possible. Besides, it is able to generate a large-scale conformation dataset for pretraining, and be efficiently used in downstream tasks. Hopefully, the above response can address your concerns about the conformation data.\n\n**Noise sampling strategies.** We tried Gaussian distribution and truncated Gaussian distribution for coordinates noises. The experimental results showed slight decreases in performance for the downstream task. Besides, maybe due to the larger range of noise, Gaussian distribution sometimes caused numerical instability in the fp16 mix-precision training. So we use the uniform distribution. \n\n**Ablation studies for 3D information.** Sorry for missing the ablation studies. We have run an ablation experiment for this and got some results in the BBBP, BACE, QM7 and QM8 downstream tasks. To demonstrate the effectiveness of introducing 3D information, we replace the original invariant spatial position encoding with a 2D Graphormer-like[1] shortest path positional encoding and a 1D BERT-like[2] relative position encoding on atoms. The results are summarized in the following table. \nDataset|BBBP(%, ↑)|BACE(%, ↑)|QM7(↓)|QM8(↓)|\n:---:|:--:|:---:|:---:|:--:|\n**Uni-Mol**|**72.9(0.6)** |**85.7(0.2)**|**41.8(0.2)** |**0.0156(0.0001)**|\n2D shortest path encoding (Graphormer like)|71.6(2.1)|85.6(1.1)|60.6(0.2)|0.0164(0.0001)|\n1D BERT-like relative positional encodings on atoms|70.3(1.9)|77.8(3.7)|77.5(2.7)|0.0283(0.0007)|\n\nFrom the above table, it is clear that 3D information indeed helps the performance of downstream tasks. 
Due to the tight timeline, we cannot finish all downstream tasks. We will add the full results to the paper in the next revision.\n\n**Data release.** Yes, we will release all data and codes.\n\nThank you very much for reading. Please reconsider your rating on this paper if your questions and concerns are addressed.\n\n>Reference:\n\n[1] Ying, Chengxuan, et al. \"Do transformers really perform badly for graph representation?.\"\n\n[2] Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding",
" This paper proposes a method to incorporate 3D information into molecular representation learning, Uni-Mol. Particularly, Uni-Mol includes three parts: \n\n1. Two models (a molecular model and a pocket model) built upon the SE(3)-equivariant transformer architecture; \n2. Two large-scale datasets (a 209M molecular conformation dataset and a 3M candidate protein pocket dataset) and corresponding pretraining strategies; \n3. Finetuning strategies for various downstream tasks. \n\nIn the experiments, Uni-Mol outperforms SOTA in most of the tested tasks and demonstrates its ability in few-shot learning settings. **Strengths**:\n\n* (originality) The proposed method, Uni-Mol, is original and has reasonable designs: (1) For the backbone model architecture, modifications (including invariant spatial positional encoding, pair representation, and a coordinate head) have been made to the standard Transformer to adapt to 3D molecular representation; (2) Two new large-scale datasets as well as the proposed 3D position denoising task for pretraining; (3) Detailed guidances of how to fine-tune Uni-Mol on downstream tasks. \n* (quality) The experiments were conducted thoroughly and touched four aspects of molecular representation learning: molecular property prediction, molecular conformation generation, pocket property prediction, and protein-ligand binding pose prediction. For each task, the datasets and baselines are chosen carefully. \n* (significance) The proposed method Uni-Mol outperforms previous work in almost every tested task, which is impressive. In some tasks, like molecular conformation generation and pocket property prediction, the improvements are significant. \n* (clarity) The paper is presented clearly and easy to follow. Contributions are properly emphasized. \n\n**Weaknesses**:\n\n* (originality) The design of Uni-Mol is largely based on previous work. For the backbone model architecture, it is not new to use Transformers in molecular representation (e.g., GROVER), and most modifications can also be found in previous work (e.g., pairwise Euclidean distance matrix representation, the SE(3)-equivariant head as in EGNN, the masked atom prediction task). This limits the novelty of the proposed method. \n* (significance) Some experiment results didn't show significant improvement over previous methods (e.g., molecular property prediction tasks and ESOL). But this is acceptable since tasks like molecular property prediction have already been studied for long, and the baseline models are very competitive. \n* The related work focuses more on molecular representation methods, while the literature on pretraining has been ignored. * In line 114 and line 119, it is mentioned that the 3D conformations and pocket binding positions are obtained by using simulation or optimization toolkits. Will these toolkits introduce bias to the data? If yes, do you have any idea for alleviating such bias?\n* In line 130, it states that for the 3D position denoising task, the noise is uniformly sampled from [-1 A, 1 A]. How is the sampling strategy determined? Have you tried other sampling strategies like using a Gaussian distribution?\n* In line 204, it is concluded that 3D information helps the model to learn better representations. Are there any ablation study results that can justify this more convincingly? \n* Will the datasets be released for academic use? The limitations are not highlighted or summarized in the paper. \n\nTo me, my most concern is about the data used to pretrain the model. 
As mentioned in the Questions section, since the 3D information in the data mostly comes from computational results instead of ground-truth annotations, how do we know such bias in data won't influence the model performance? If bias is inevitable, what benefits can be guaranteed by using such data?",
" The paper proposes a pretraining / self-supervised learning method for 3D molecular representation learning. The backbone is Transformer based. The pretraining is adding noise to atom coordinates. The pretraining Experiments on molecular property prediction, molecular conformation generation, pocket property prediction and protein-ligand binding pose prediction show the effectiveness of the proposed method. \n 1. Among the early efforts, the paper proposes a pretraining / self-supervised learning method for universal 3D molecular representation\nlearning framework. \n\n2. The proposed method achieves SOTAs on several tasks and datasets.\n 1. The novelty of the proposed method seems not that significant. \n- One difference or advantage that the paper claims is 3D encoding. However, the Euclidean distance map (e.g.,[1]) or Gaussian functions for distance processing [2,3] has been widely used in protein 3D structure learning. \n\n[1] Highly accurate protein structure prediction with AlphaFold\n\n[2] GraphQA: protein model quality assessment using graph convolutional networks \n\n[3] Learning from Protein Structure with Geometric Vector Perceptrons\n\n- There have been a few methods for the atom-level molecule or protein 3D structure modeling (e.g., [4]). The proposed method should be compared with them via experiments.\n\n[4] Intrinsic-Extrinsic Convolution and Pooling for Learning on 3D Protein Structures\n\n- The pretraining method seems very simple. It is more like a trick. \n- (Minor) There are a few existing pretraining / self-supervised learning methods for protein representation learning. The authors might want to add a discussion about them.\n\n[5] Structure-aware Protein Self-supervised Learning\n\n[6] Contrastive Representation Learning for 3D Protein Structures\n\n[7] Protein Structure Representation Learning by Geometric Pretraining\n\n\n2. The motivation to employ Transformer is not clear. The explanation that \"Transformer is the default backbone in representation learning\" is not convincing or even incorrect. Moreover, as a Transformer-related work, it could be better to visualize the self-attention map to verify the motivation for employing Transformers. \n\n\n3. Besides adding the pair representation, is there any other difference with the original Transformer? Moreover, using Transformers for 3D modeling has been explored in 3D vision. Some important competitors should be compared, such as [8], to show the effectiveness of the proposed method. \n\n[8] Point Transformer.\n\n\n\n4. Some important experiments or ablation studies are missing. \n\na) Accuracy w/ and w/o pertaining. \n\nb) Effect of invariant spatial positional encoding.\n\nc) Effect of pair representation.\n\nd) Effect of pair representation.\n\ne) Effect of SE(3)-equivariance coordinate head.\n NA",
" This work provides representation learning for 3D molecules (like BERT in NLP). The authors provide two pretrained models using the same model architecture. The first one is trained on 209M molecular conformations, and the second one is trained on 3M protein pocket data. The model architecture is a modified Transformer, and the pretraining tasks include masked atom prediction and 3D position denoising. The pretrained models can be used for various tasks, such as molecular property prediction, molecular conformation generation, protein-ligand binding pose prediction, and pocket druggability prediction. The performance is very good. Strengths:\n1. This work is well-written and easy to follow.\n2. It provides two good pretrained models for small molecules and protein pockets. The models can be finetuned for various downstream tasks.\n3. The experiment results show that the provided models can outperform previous methods on various tasks.\n\nWeakness: \n1. About the model architecture: There are many existing Transformer architectures like [1] for graph representations. The authors should discuss the differences with these methods.\n[1] Ying, Chengxuan, et al. \"Do transformers really perform badly for graph representation?.\" \n2. About baseline results: For the protein-ligand binding pose prediction task, there are several graph DL-based methods like [2], I suggest the authors include such papers.\n[2] Stärk, Hannes, et al. \"Equibind: Geometric deep learning for drug binding structure prediction.\" \n 1. What is the difference between the proposed model architecture and existing Transformer architectures like Graphormer?\n2. I suggest the authors compare (or discuss) the newly-proposed graph DL-based methods on the protein-ligand binding pose prediction task.\n3. Can we extend protein pockets to whole proteins since in some cases, we don’t know the pocket? If not, what is the challenge? The authors have addressed the limitations of their work.",
" The authors proposed the same SE(3)-equivariant transformer variant, called Uni-Mol, for both molecular pretraining and protein pocket pretraining. To make the model SE(3)-equivariant, they devise an invariant spatial positional encoding for the transformer variant. Specifically, a affine transformation is used to fuse euclidean distance of atom pairs and types of atom pairs, then a Gaussian density function is used to get continuous representations of the atom pairs. After that, the self-attention mechanism takes the pair representation as a bias term in softmax calculation to better leverage 3D information of atoms. Following the idea of EGNN, the authors also propose a SE(3)-equivariant coordinate head to to directly predict coordinates of atoms. According to different downstream tasks, the two models are used independently or jointly. This manuscript describes a universal pretrained model for both molecular and protein pocket representation. Specifically, it shows an interesting way of integrating attention mechanism with pair Euclidean distance matrix and atom pair type matrix, keeping the transformer variant SE(3)-equivariant. The authors conduct many downstream tasks to evaluate the performance of the pretrained model and the results are mostly promising. However, it is a little pity that some aspects of the approach are not clearly communicated. 1、For molecular conformation generation task, what is the input of the model for fine-tuning? That is, does the model need some kind of ‘random atomic coordinates’ as input for molecular conformation optimization? Moreover, how to randomly generate 10 conformations as the authors mention in Sec. 3.2? I guess ETKGD is utilized again to generate coordinates as inputs of Uni-Mol. If so, do other models need 3d coordinates as input, i.e., using ETKGD in a similar way? Do the authors check whether any molecules exist in both pretraining data set and test set?\n2、For pocket property prediction task, the authors construct a regression dataset based on Fpocket tool. What is the difference between Fpocket Score and Druggability Score? Moreover, it seems only Fpocket Score is conducted in Table 4. Besides, how well Fpocket tool can perform in reality? Can the authors test this tool on NRDLD ( if possible )?\n3、For 3D prediction tasks of protein-ligand pairs, the authors claim that they use a simple differential evolution algorithm to sample and optimize the complex conformations. More details should be revealed.\n4、The authors should investigate the impact of the pair Euclidean distance matrix in ablation studies.\n5、The authors should demonstrate the effectiveness of pretraining process. That is, how much improvement can pretraining bring to model performance?\n6、The authors mention that 3D spatial positional encoding leaks chemicl bonds in Sec. 2.2 which is contradictory to some content in Sec. 2.1 (third to last sentence in invariant spatial positional encoding part).\n7、In Table 2 in supplementary material, max number of atoms is 256 for protein pocket. Is this value big enough for a protein? Moreover, why vocabulary size is different between molecules and proteins (30 for molecule and 9 for pocket )?\n\n 1、 The proposed SE(3)-equivariant transformer architecture has a limitation on the length of input molecules. In another word, it can not handle with molecules or protein pockets with length bigger than the max atom numbers (in this paper, the number is 256)."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"nips_2022_IfFZr1gl0b",
"nips_2022_IfFZr1gl0b",
"J4YP9qvto9p",
"-jeGmGfwQQM",
"Zhd-TZoWHQ0",
"PnpE-7HLJh_",
"s9TqG3LkgIwy",
"7wQjSwTgTau",
"VlIgJJsDQp6",
"nips_2022_IfFZr1gl0b",
"pD2kvQRCZ8O",
"BJHsVZ0g2ot",
"Uv-HpgJzDqK",
"3djj7SBxF8H",
"jZWLM_S3Kq1",
"nips_2022_IfFZr1gl0b",
"nips_2022_IfFZr1gl0b",
"nips_2022_IfFZr1gl0b",
"nips_2022_IfFZr1gl0b"
] |
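The review above describes Uni-Mol's invariant spatial positional encoding at a high level: pairwise Euclidean distances are passed through a pair-type-dependent affine transformation and a Gaussian density function, and the resulting pair representation enters self-attention as a bias term. The sketch below is only a minimal illustration of that idea, not the paper's actual implementation; the function name and all parameter shapes (`gaussian_pair_bias`, `a`, `b`, `mu`, `sigma`, the number of kernels `k`) are our own illustrative assumptions.

```python
import numpy as np

def gaussian_pair_bias(coords, pair_type, a, b, mu, sigma):
    """Illustrative SE(3)-invariant pair representation.

    coords:    (n, 3) atom coordinates
    pair_type: (n, n) integer atom-pair type indices
    a, b:      per-pair-type affine parameters, shape (num_types,)
    mu, sigma: Gaussian kernel centers and widths, shape (k,)

    Returns an (n, n, k) tensor that is invariant to rotations and
    translations of `coords`, since it depends only on distances.
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # (n, n) Euclidean distances
    affine = a[pair_type] * dist + b[pair_type]     # pair-type-dependent affine map
    z = (affine[..., None] - mu) / sigma            # (n, n, k)
    return np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Tiny usage example with random inputs; a per-head linear projection of the
# (n, n, k) output could then be added to the attention logits before the
# softmax, which is the role the review attributes to the pair representation.
n, num_types, k = 4, 3, 8
rng = np.random.default_rng(0)
bias = gaussian_pair_bias(
    coords=rng.normal(size=(n, 3)),
    pair_type=rng.integers(num_types, size=(n, n)),
    a=rng.normal(size=num_types),
    b=rng.normal(size=num_types),
    mu=np.linspace(0.0, 5.0, k),
    sigma=np.full(k, 0.5),
)
print(bias.shape)  # (4, 4, 8)
```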
nips_2022_k6WzeLZjxuP | Factored DRO: Factored Distributionally Robust Policies for Contextual Bandits | While there has been extensive work on learning from offline data for contextual multi-armed bandit settings, existing methods typically assume there is no environment shift: that the learned policy will operate in the same environmental process as that of data collection. However, this assumption may limit the use of these methods for many practical situations where there may be distribution shifts. In this work we propose Factored Distributionally Robust Optimization (Factored-DRO), which is able to separately handle distribution shifts in the context distribution and shifts in the reward generating process. Prior work that either ignores potential shifts in the context, or considers them jointly, can lead to performance that is too conservative, especially under certain forms of reward feedback. Our Factored-DRO objective mitigates this by considering the shifts separately, and our proposed estimators are consistent and converge asymptotically. We also introduce a practical algorithm and demonstrate promising empirical results in environments based on real-world datasets, such as voting outcomes and scene classification. | Accept | This paper studies off-policy learning with an environment shift, where the distributions of both contexts and rewards can change. The authors address both challenges in a factored form, and derive error bounds for both off-policy evaluation and optimization. The proposed approach is evaluated on real-world datasets. The original ratings of the paper were 6, 6, 6, 6, and 5; and they did not change after the rebuttal. The reviewers generally praised the paper and their main concerns were addressed in the rebuttal:
* Asymptotic error bounds were replaced with finite-sample guarantees.
* Synthetic shifts in the distributions in experiments were complemented with actual shifts in data.
This is a good paper to accept and I believe that the authors will improve the paper further based on the feedback of the reviewers. | train | [
"NnMnTwiIesQ",
"ufWkYWJ3G_M",
"HRRwU43kR17",
"7c_XPY4MN5f",
"b5BQ9Uycu4X",
"xE2uyRVReonm",
"mZMd7DHewZC",
"F4FPeMp4a5Y",
"igXFAJr2UK",
"pi1taCsTLy",
"UujdRhgOhyO",
"qcjuBHyOtyX",
"ZzDEQ-KIaNH",
"QQ2tH3Wa6Uo",
"eUnDPEPDJpq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification. I do not have further questions. ",
" The response addressed my primary concerns, and filled all the technical \"holes\" I could point out in the review. I do think much of the technical content overlaps quite a bit with the work by Si et al. (2020), but the authors have demonstrated that there are important benefits to using the factorized distribution. Based on its contributions, I believe the paper should be accepted, and will maintain my positive score. ",
" The answers address most of my concerns. I do not have further concerns. Even though this is not the field I am thoroughly covering, I am still inclined to the acceptance of this paper. Such factorization seems a natural choice if there is no strong evidence for the need for modeling complex non-factored relations. Supporting theory seems without serious errors. Although I can see some concerns about the empirical evaluations from another reviewer, I will keep my positive score.",
" We thank all the reviewers for their helpful feedback! We highlight here some general changes in the rebuttal version of our submission:\n\n- **Updated Theoretical Analysis**: We updated our theoretical analysis and we have updated Theorem 2 and 3 to include rates of convergence which depend on the number of samples n. Our analysis is non-asymptotic and we show our algorithm has $\\sqrt{\\frac{log(n)}{n}}$ dependence on n.\n\n- **Updated Plots**: We updated Figure 1c for the Voting dataset to reflect a real-world shift that occurs between cities in the dataset. We find similar results as before where FDRO's performance increases on the test distribution as we increase the robustness parameter. We additionally update Figure 1f and the Appendix (Figure 3a) to include plots showing the performance in the Scene and simulated Bernoulli environments for different values of the reward shift radius hyperparameter, $\\delta_c$ (0.001, 0.1, 0.25). Empirically we found performance to not vary too much based on the choice of $\\delta_c$. \n\n- **Updated Discussions**: We update our discussions in Section 6.2 to provide more discussion comparing baseline-DRO and FDRO as well as the equivalence of solutions (under the heading 'Discussion: When to use FDRO vs baseline-DRO (equivalence of solutions)).\n",
" Thank you for bringing up these important points and questions. To address your questions about comparisons under our algorithm and baseline-DRO, we added an additional discussion, \"Discussion: When to use FDRO vs baseline-DRO (equivalence of solutions)\" to Section 6.2. We answer your questions (potentially out of order) below.\n\n- **Setting parameters for equivalent performance** Thank you for this question! In general, it is not always possible, for a given $\\delta_c$ and $\\delta_x$, to set $\\delta$ (in baseline DRO) such that the resulting algorithm will output the same policy as our method. Consider the KL-divergence chain rule decomposition: $D_{kl}(p_0(R, X)|| p(R,X)) = D_{kl}(p_0(X)||p(X)) + E_{p_0(x)} [D_{kl}(p_0(R|X)||p(R|X))]$. Recall $D_{kl}(p_0(R, X)|| p(R,X)) = \\delta$, $D_{kl}(p_0(X)||p(X)) = \\delta_x$, $D_{kl}(p_0(R|X)||p(R|X)) = \\delta_c$. We can see, when $\\delta_c$ is the same for each context, we have $\\delta = \\delta_x + \\delta_c$. When $\\delta$ of baseline-DRO is set to $\\delta_x + \\delta_c$ baseline-DRO and FDRO consider distributions at most the same KL distance away, with FDRO placing additional constraints on the set of distributions searched. From the decomposition, we can see, with additional stipulations effort, one can set a family of $\\delta_x$ and $\\delta_c$ values for FDRO to achieve the same solution of baseline-DRO for each $\\delta$ (we give additional details in appendix). However, for a given $\\delta_x$ and $\\delta_c$ it is not possible to find a $\\delta$ for baseline-DRO that achieves the same solution. We additionally update our appendix (supplementary material) to provide a detailed example in a simple 2 context, discrete reward setting in section A.4.4 that examines the difference of behavior under the two formulations as well as the conditions for equivalence.\n \n- **Why is baseline-DRO overly conservative? When is FDRO worse?**: From the decomposition given previously, we see Baseline-DRO has less constraints and can, for example, place the shift “budget” ($\\delta_x + \\delta_c$) all in the rewards or all in the contexts if that is the worst-case. This causes the degenerate performance in the binary rewards case for baseline-DRO mentioned in the text where reward shift dominates. FDRO can set limits on how much reward shift is considered and ensure context shift is considered, allowing us to overcome this. That said, as you mentioned FDRO does have more parameters which makes it more difficult to set. There may indeed be cases where FDRO's parameters are set incorrectly and perform worse than baseline-DRO.\n \n- **When to use FDRO vs baseline-DRO:** The degenerate binary example above informs us that generally, if the algorithm designer would like to ensure a certain amount of reward and context shift are considered (for example, as we need to in our updated Voting experiment with binary rewards, where we want to use data from one city to inform policies for other cities), then it may be beneficial to use FDRO and invest the additional effort to decompose $\\delta$ between $\\delta_x$ and $\\delta_c$. Additionally, we expect FRDO to be particularly useful when there are independent shifts in the reward and context distributions, which is a common setting, such as shifting locations, as there, $\\delta_x$ and $\\delta_c$ are easier to set. 
\n\n- **Error in FDRO and baseline-DRO:** As previously mentioned, when we set baseline-DRO's $\\delta = \\delta_x + \\delta_c$, then both FDRO and baseline-DRO arrive at solutions that search over worst-case distributions at most a distance of $\\delta$ away from the data generation distribution. They arrive at different solutions, as FDRO searches a more constrained space, however it is difficult without knowing the intended deployment distributions ahead of time to know which solution will work better.\n\n- **Sensitivity of parameters of FDRO:** Thank you for bringing up this point. We update Figure 1f and the Appendix to include plots showing the performance of the Scene and simulated Bernoulli environments for different values of the reward shift radius hyperparameter, $\\delta_c$ (0.001, 0.1, 0.25). Empirically we found performance to not vary too much based off of choice of $\\delta_c$. \n\n- **Theoretical Analysis:** Thank you for highlighting the need for updated theoretical analysis. We updated our theoretical analysis and we have updated Theorem 2 and 3 to include rates of convergence that depend on the number of samples n. Our analysis is non-asymptotic and we show our algorithm has $\\sqrt{\\frac{log(n)}{n}}$ dependence on n.\n\n**Did the above answer your questions? We also welcome additional questions or feedback.**\n",
" Thank you for your helpful feedback! We added additional prose between equation 2 and equations 4 and 5.\n\n- **How FDRO overcomes the degenerate case** Thank you for asking this question! To address this question, we update our discussions in Section 6.2 to provide more discussion comparing baseline-DRO and FDRO as well as the equivalence of solutions (under the heading 'Discussion: When to use FDRO vs baseline-DRO (equivalence of solutions)). In summary, we can see when we set the baseline-DRO parameter to $\\delta = \\delta_x + \\delta_c$ baseline-DRO and FDRO consider distributions the same KL distance away, with FDRO placing additional constraints on the set of distributions searched. Baseline-DRO has less constraints and can, for example, place the shift “budget” ($\\delta_x + \\delta_c$) all in the rewards or all in the contexts if that is the worst case. This causes the degenerate performance in the binary rewards case for baseline-DRO where reward shift dominates. FDRO can set limits on how much reward shift is considered and ensures context shift is considered, allowing FDRO to overcome this.\n\n**Did the above answer your questions? We also welcome additional questions.**\n",
" Thank you for your helpful comments!\n- **Taylor expansion impracticality**: Thank you for bringing up this concern! In our method, we use a Taylor expansion approximation of a finite order m. We note that while our context may be high dimensional, we are actually only taking the Taylor expansion with respect to the scalar $\\alpha_c$. Therefore the difficulty of the Taylor approximation does not scale with the context dimension space. We also wanted to clarify something that might have been confusing in our submitted text: we mention that one could train a model to predict each moment which indeed can be very computationally burdensome. However many common models can predict multidimensional outputs (ex. kernel regression, neural networks) and for those methods, we actually only need to train a single model that predicts all the moments. This is much more computationally efficient and for all our experiments we take this approach. We have updated Section 5 to reflect these points. \n- **When to consider context shift?**: Thank you for the opportunity to clarify this. We agree with the reviewer that there are cases where distribution shift indeed may not matter (e.g. see our section 3 “Note about misspecification”). Distribution shift is important when there is misspecification and the underlying policy class does not perfectly capture the underlying process. In these cases, the learned model encodes the biases of the population distribution. This can occur both when the function class is underspecified or when the function class is overspecified/when there is not enough data so regularization is needed. You are correct that when both the underlying policy function class can perfectly capture the underlying process and there is sufficient data such that regularization is not needed, then we do not need to consider shifts as we can perfectly choose the best action for each context. However many real-world processes (for example human behavior) are often complex and hard/impractical to model so often models may be misspecified. For example, in our empirical evaluation (Figure 1) we compare to a common baseline in the batch context setting that does not consider shifts (baseline-IS). In the real-world shift between cities for the Voting dataset experiment (updated Figure 1c), there is a significant shift that our method does better on. \n- **Updated Theoretical Analysis**: We updated our theoretical analysis and we have updated Theorem 2 and 3 to include rates of convergence which depend on the number of samples n. Our analysis is non-asymptotic and we show our algorithm has $\\sqrt{\\frac{log(n)}{n}}$ dependence on n.\n\n**Did the above answer your questions? We also welcome additional questions.**\n\n",
" Thank you for your thorough and helpful feedback! \n- **Sensitivity to $\\delta_c$** Thank you for the nice suggestion! We update Figure 1f and the Appendix (Figure 3a) to include plots showing the performance of the Scene and simulated Bernoulli environments for different values of the reward shift radius hyperparameter, $\\delta_c$ (0.001, 0.1, 0.25). Empirically we found performance to not vary too much based off of choice of $\\delta_c$. \n- **Other Smoothing Approaches** Thank you for the suggestion. While we focus on evaluating with the Taylor expansion approximation for smoothing in continuous contexts, we agree that other approaches such as k-NN smoothing would most likely work. We did not try this approach, but we expect it can have good performance and it would be great to consider these methods in future work. \n- **The existence of the MGF** Thank you for your question. We first quickly note that we do not need the data distributions to have a MGF. Instead we notice a component of our estimator, $E[\\exp(-R)/\\alpha_c]$ naturally defines a MGF over the variable $c_{xa} \\sim P(-R|X,A)$. We then estimate this value with a Taylor series approximation. We realize that using the term MGF may be confusing and we have changed the wording in section 5 to clarify this. The key question is then the validity of the Taylor expansion of $E[\\exp(t c_{xa})$ where $t = 1/\\alpha_c$. Because we assume rewards $R$ are bounded, it follows $c_{xa}$ is subgaussian so the MGF exists and is finite, and there is a valid Taylor expansion for $\\alpha_c > 0$. Let us know if this answers your question.\n- **Theory** Thank you for the question! We updated our theoretical analysis and show non-asymptotic convergence. We have updated Theorem 2 and 3 to include rates of convergence and we show our algorithm has $\\sqrt{\\frac{log(n)}{n}}$ dependence on n.\n- **Joint Shift** Thank you for bringing up this point. We update our discussions in Section 6.2 to provide more discussion comparing baseline-DRO and FDRO as well as the equivalence of solutions (under the heading 'Discussion: When to use FDRO vs baseline-DRO (equivalence of solutions)). To summarize: the KL-divergence chain rule decomposition: $D_{kl}(p_0(R, X)|| p(R,X)) = D_{kl}(p_0(X)||p(X)) + E_{p_0(x)} [D_{kl}(p_0(R|X)||p(R|X))]$. Recall $D_{kl}(p_0(R, X)|| p(R,X)) = \\delta$, $D_{kl}(p_0(X)||p(X)) = \\delta_x$, $D_{kl}(p_0(R|X)||p(R|X)) = \\delta_c$. We can see, when $\\delta_c$ is the same for each context, we have $\\delta = \\delta_x + \\delta_c$. When $\\delta$ of baseline-DRO is set to $\\delta_x + \\delta_c$ baseline-DRO and FDRO consider distributions the same KL distance away, with FDRO placing additional constraints on the set of distributions searched. From the decomposition, we can see, with additional stipulations, we can set a family of $\\delta_x$ and $\\delta_c$ values for FDRO to achieve the same solution of baseline-DRO for each $\\delta$ (we give details in appendix). However, for a given $\\delta_x$ and $\\delta_c$ it is not possible to find a $\\delta$ for baseline-DRO that achieves the same solution. However we do expect FRDO to be particularly useful when there are independent shifts in the reward and context distributions, which is a common setting, such as shifting locations, as the $\\delta_x$ and $\\delta_c$ are easier to set. \n- **Continuous/large contexts spaces:** Thank you for bringing up this point. We first want to clarify that the practical method proposed in Section 5 works for continuous contexts. 
All the empirical evaluations utilize this method and are in continuous context settings and we use the practical method for FDRO results. To answer your question about larger context dimensions and action spaces, we expect our method to work well in larger context spaces, given enough data. We note some of the datasets we consider have large feature spaces such as the Scene dataset where the context space has 294 continuous features. Due to the small dataset size, we use random projection to project this down to 3 continuous features. To illustrate our method's ability to handle larger feature dimensions, we update our appendix to include a figure (Figure 2c) where we project to 10 continuous features instead (we expect our method to work for larger dimensions as well, but found it hard to increase dimension past this due to the small amount of data). \n\n**Did the above answer your questions? We also welcome additional questions or feedback.**",
" Thank you for your helpful comments!\n- **Real-world shifts:** Thank you for the nice suggestion! We updated Figure 1c for the Voting dataset to reflect a real-world shift that occurs between cities in the dataset. We find similar results as before where FDRO's performance increases on the test distribution as we increase the robustness parameter. \n\n- **Theoretical Analysis:** Thank you for highlighting the need for updated theoretical analysis. We updated our theoretical analysis and we have updated Theorem 2 and 3 to include rates of convergence that depend on the number of samples n. Our analysis is non-asymptotic and we show our algorithm has $\\sqrt{\\frac{log(n)}{n}}$ dependence on n. We no longer utilize Lemma 1 (which was Theorem 1 of Si et. al (2020)) and our updated analysis uses different techniques. For example, Theorem 1 of Si et. al uses asymptotic techniques such as the central limit theorem to give asymptotic rates of convergence for policy evaluation while our updated analysis is non-asymptotic. We agree lower bounds would be a great avenue for future work.\n\n- **Choosing the maximum divergence:** Thank you for bringing up this point. We agree this is an important issue and indeed it is a limitation common to all distributionally robust methods. To address this further, we update Figure 1f and the Appendix (Figure 3a) to include plots showing the performance of the Scene and simulated Bernoulli environments for different values of the reward shift radius hyperparameter, $\\delta_c$ (0.001, 0.1, 0.25). Empirically we found performance to not vary too much based on the choice of $\\delta_c$. \n\n**Did the above answer your questions? We also welcome additional questions.**",
" Thank you so much for your thoughtful feedback! We are currently working on addressing your comments. To ensure we address the method you were thinking of, we were wondering if you would be able to provide a few references to the \"Factor constraining\" method mentioned. ",
" The paper studies the problem of distributionally robust batch policy optimization in contextual bandits, where the distributional shift occurs over both the context and reward distribution. Under bounded error in KL-divergence, the authors show that the value of a policy can be optimized by considering the dual objective under both distribution shifts. The authors deviate from prior work by considering the two distribution shifts separately in a factored optimization objective.\n\nThe authors theoretically show that that the estimated value of a policy converges to the true one asymptotically, and empirically demonstrate its effective against existing baselines in several real-world domains; in doing so, the authors propose how their proposed objective can be estimated outside of tabular domains (i.e. when context is a continuous vector). Strengths:\n- The paper tackles an important problem of robust batch policy optimization. The authors provide a compelling motivation for their work via voting and healthcare applications. \n- The proposed algorithm is sound and provides substantive improvements over existing baselines, including one that considers joint distribution shift rather than the context and reward shifts separately. The authors also provide an interesting approach to extend their approach beyond tabular environments via estimating the moments of the distribution.\n- The empirical evaluation is comprehensive. The authors compare their proposed approach against strong, well-tuned baselines in multiple domains that rely on real-world classification and voting data. \n\nWeaknesses:\n- The exact algorithm somewhat lacks in novelty, as it is a hierarchical application of the dual objective trick used in prior works, notably Si et al. (2020). This also applies to the analysis, as Lemma 1 matches Theorem 1 of Si et al. (2020). \n- To my knowledge, the theoretical guarantees (Theorem 2 and 3) argue mostly about asymptotic convergence. The results would be stronger if there were sample complexity bounds and showed the dependence on the number of samples n. There also is a lack of guarantees on lower-bounds.\n- Though the experiments use real-world data, the actual distribution shifts are simulated. It would be interesting to use a real-world example of a distribution shift. I was wondering if sample complexity bounds can be derived for Theorem 2 and 3 akin to Lemma 1? Specifically, is there something that prevents the authors from applying the same analysis techniques on the context distribution shift as they did on the reward distribution? The authors discuss limitations of their work in a separate Discussion section. An important one that the authors mention is how to choose the maximum divergence, which would affect how conservative the optimized policy will be. ",
" This work considers the contextual bandit problem with shifts on the distributions of both contexts and reward generators. An analysis on asymptotical optimality is provided. A practical algorithm as well as the empirical evaluations are presented. Strength:\nThis work is well-motivated. The problem of distribution shift is significant in practice.\nThe theoretical result is satisfying, even though it analyses the asymptotic behavior of the algorithm.\nThis work attempts to provide practical version of the algorithm.\n\nWeakness:\nThe proposed algorithm seems not very practical as it relies on Taylor expansion, especially when the underlying functions have many parameters.\n Sometimes the distribution shift on the context does not matter under some weak assumptions, e.g., LinUCB works on linear contextual bandit with any context distribution. Therefore, a relevant question is when we should take context distribution shift into consideration? no obvious limitation.",
" This paper proposes an improvement of distributionally robust offline learning for contextual Bandit problem. The authors formulates the robust learning problem by proposing a nested robustness criteria involving context drfit and drift in reward distribution. It builds upon the recent work but expands to the 'factored' form to make the robustness specification more fine-grained. To me this is a good piece of work that builds upon the existing work and make it more fine-grained. The motivation of the problem has been clearly articulated and the mathematical super-structure has been nicely laid out thorugh both the main text and the appendix. This is not in my field of expertise, but I think the amount of novelty and the rigor is enough for an acceptance.\n\nAt some points it feels like a jump, especially from eq 2 to eq 4 and 5. It would be nice to provide more detailed steps. Can you concretely show how your algorithm can fix the issue highlighted in line159-161? I think the authors have done a good job in listing the limitations and I have nothing further to add. ",
" This paper provides a method (Factored-DRO) that learns a distributionally robust policy over separately handled shifts in context and reward distributions. **Strengths** \n\nOverall, I find the motivations to be convincing, the methodology to be reasonable, and the paper to be generally well-written. \n\n\n**Weaknesses** \n\nI would be interested in a more in-depth conceptual, theoretical, and practical comparison between FDRO and its closest point of comparison baseline-DRO (Si et al). For example, while FDRO has the advantage of being able to separate out shifts in context and reward distributions, by requiring more parameters to be set, is it more sensitive to improperly set parameters, i.e. does the error compound? Does this imply that FDRO may be less advantageous to use for users who have less information about the distribution shifts in their application of interest? \n\nPlease see my questions below. - Why is it the baseline-DRO can be overly conservative compared to FDRO? Are there situations where FDRO can also be overly conservative (or overly optimistic)?\n\n- Is FDRO guaranteed to be always better than baseline-DRO in test, and why? If not, is there a situation where we can expect baseline-DRO to do better?\n\n- Is it possible to set $\\delta$ and $\\delta_x, \\delta_c$ such that baseline-DRO and FDRO will have equivalent performance?\n\n- As the authors mention, one limitation common to DRO methods is setting the $\\delta$ parameter. Since FDRO has two parameters to set, is it more sensitive to improperly set values? E.g. if $\\delta_x$ and $\\delta_c$ in FDRO are set too large, does the error compound, leading to worse performance, compared to baseline-DRO when $\\delta$ is similarly set too large?\n\n- In this light, does this imply that FDRO may be less advantageous to use for users who have less information about the distribution shifts in their application of interest? Should the user be more or less conservative in setting the parameter $\\delta$ in FDRO vs baseline-DRO?\n\n- Why are finite sample guarantees given for reward shift but asymptotic for context shift? Is it possible to obtain finite sample guarantees for policy evaluation and learning (Theorems 2 and 3) versus convergence of reward shift estimates, and if not, what is the barrier? Aside from the points addressed in my comments above, the authors have adequately addressed the limitations and impacts of their work. ",
" This paper proposes an algorithm to learn a contextual bandit policy in the batch setting when there is a distributional shift between training and deployment. The main motivation of the proposed approach is the benefit of constraining the factors of the distribution of the context and the reward. \n\nBased on the existing theoretical work on the minimax approach to DRO, the paper proposes a procedure for policy evaluation and improvement with convergence analysis. With other estimation details filled, FDRO is empirically evaluated against other baselines highlighting the strength of the proposed approach. Strengths\n\n- The proposed policy iteration is well-motivated by a previously established theoretical work.\n- Also, the convergence analysis of the proposed method is provided.\n- Empirically it is demonstrated that the factor constraining generally does not harm the policy iteration compared to the joint constraining except for a slim edge case (the same train/test).\n\nWeaknesses\n\n- Rather than calling them weakness, I put relevant questions in below box.\n- There are some typos\n * in eq.(6) $\\frac{-R}{\\alpha_c}$ subscript $j$ of $R$ missing\n * at line 290, $V^* = \\min_{\\pi} V(\\pi)$ which seems to have to be $\\max_{\\pi}$, \n * in line 102 'We assume we the', \n * in eq.(4), closing curly/square parenthesis are swapped, etc. \n - It seems that MGF approximation seems working well in practice. However, (considering a bit of theoretical flavor of this paper) in theory, MGF is not always guaranteed to exist. Then if the nominal (true) distribution has the MGF, could it be possible to say that a distribution of $\\epsilon$ KL-ball of the nominal distribution had MFG?\n- For continuous contexts, binning approach has been proposed. I was wondering if there is any other approach the author tried, for example, k-NN like smoothing, estimating reward distribution for a given context using (R,C,A) triplets from nearby contexts.\n- When there is a joint shift in the context and the reward, how can FDRO be compared with the joint constraining, e.g. baseline-DRO?\n- Does theorem 2 hold uniformly for $\\pi$? Since the uniform convergence is mentioned on line 208, it would be better to clarify this.\n- Factor constraining is basically considering KL-ball for each factor from some factorization of the original full distribution. In the full KL-ball of the same factorization, it is KL-distance on context distribution and expected KL-distance on context-action conditioned reward distribution. Then this can motivate a variation of FDRO, in which not setting the uniform (minimax) bound of KL-distance on context-action conditioned reward distribution, expected (Bayesian) bound of KL-distance on reward distribution. How would this be compared with the FDRO, any thought on this?\n- In contrast to context shift, reward shift $\\delta_c$ is fixed to $0.001$. How is the sensitivity to this hyperparameter? The main motivation seems that the training tractability of factored constraining over joint constraining. However, the benefit of the current approach does not seem to be easily translated into continuous contexts making the estimation of reward distribution difficult. Still, the benefit of FDRO in discrete contexts is noticeable. Maybe its extension of the currently proposed binning approach for continuous contexts can be future work. 
In relation to this limitation, I am curious about the extent of the FDRO in discrete reward settings, for example, in the case where there are a larger number of discrete contexts and discrete actions, how FDRO would perform. Rather than asking to perform additional experiments for this, I would be happy if the authors can share their thought and previous analyses on this."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3,
3
] | [
"xE2uyRVReonm",
"igXFAJr2UK",
"F4FPeMp4a5Y",
"nips_2022_k6WzeLZjxuP",
"QQ2tH3Wa6Uo",
"ZzDEQ-KIaNH",
"qcjuBHyOtyX",
"eUnDPEPDJpq",
"UujdRhgOhyO",
"eUnDPEPDJpq",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP",
"nips_2022_k6WzeLZjxuP"
] |
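The author responses in the record above repeatedly use the KL chain rule, $D_{kl}(p_0(R, X)|| p(R,X)) = D_{kl}(p_0(X)||p(X)) + E_{p_0(x)} [D_{kl}(p_0(R|X)||p(R|X))]$, to relate the joint radius $\delta$ of baseline-DRO to the factored radii $\delta_x$ and $\delta_c$ of FDRO. The snippet below is a minimal numerical check of that identity on a toy discrete joint distribution over a binary context and a binary reward; the numbers are made up for illustration and are not taken from the paper.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions with full support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Toy joint distributions over (context X in {0, 1}, reward R in {0, 1});
# rows index X, columns index R. Values are arbitrary but strictly positive.
p0 = np.array([[0.30, 0.20],
               [0.25, 0.25]])
p = np.array([[0.20, 0.30],
              [0.10, 0.40]])

joint_kl = kl(p0.ravel(), p.ravel())

# Factored form: context-marginal term plus expected conditional term.
p0_x, p_x = p0.sum(axis=1), p.sum(axis=1)
marginal_kl = kl(p0_x, p_x)
conditional_kl = sum(p0_x[x] * kl(p0[x] / p0_x[x], p[x] / p_x[x]) for x in range(2))

# The chain rule says both quantities agree (up to floating-point error).
assert np.isclose(joint_kl, marginal_kl + conditional_kl)
print(joint_kl, marginal_kl + conditional_kl)
```

As the responses explain, setting the joint radius to $\delta_x + \delta_c$ makes baseline-DRO and FDRO search over distributions at most the same KL distance from the data-generating one, with FDRO additionally constraining how that budget is split between context shift and reward shift.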
nips_2022_25XIE30VHZE | SecureFedYJ: a safe feature Gaussianization protocol for Federated Learning | The Yeo-Johnson (YJ) transformation is a standard parametrized per-feature unidimensional transformation often used to Gaussianize features in machine learning. In this paper, we investigate the problem of applying the YJ transformation in a cross-silo Federated Learning setting under privacy constraints. For the first time, we prove that the YJ negative log-likelihood is in fact convex, which allows us to optimize it with exponential search. We numerically show that the resulting algorithm is more stable than the state-of-the-art approach based on the Brent minimization method. Building on this simple algorithm and Secure Multiparty Computation routines, we propose SECUREFEDYJ, a federated algorithm that performs a pooled-equivalent YJ transformation without leaking more information than the final fitted parameters do. Quantitative experiments on real data demonstrate that, in addition to being secure, our approach reliably normalizes features across silos as well as if data were pooled, making it a viable approach for safe federated feature Gaussianization. | Accept | This paper studies data preprocessing in the federated learning setting.
It proposes a simple and elegant algorithm for performing a Yeo-Johnson (YJ) power transform on univariate numerical data. This is a nonlinear transform intended to make the data more like a Gaussian.
The paper shows that the likelihood objective is convex and that we can sign its derivative using linear aggregates, which can be performed using secure multiparty computation. This permits us to optimize the transformation using binary search / exponential search. In the honest-but-curious setting, this exponential search does not reveal any superfluous information.
Overall, the reviewers thought this was a valuable contribution and that the paper is well-written and technically sound and that the experimental evaluation is adequate.
However, the reviewers did identify some limitations and directions for further work:
- *Trust Model.* The paper only considers the honest-but-curious setting. What goes wrong if we consider a more powerful adversary? Can we provide further guarantees?
- *Leakage via final output.* There is still the possibility of leaking sensitive information via the final parameters $\lambda$, $\mu$, & $\sigma$. This is beyond the scope of MPC guarantees, but it raises the question of whether these techniques can be combined with tools like differential privacy to address this concern.
- *Efficiency.* The overhead of the system is still quite high in terms of rounds of interaction and total data transfer. The paper appropriately discusses this limitation.
- *Heterogeneous data.* The paper discusses data heterogeneity, but it is unclear how relevant this is. The algorithm only uses linear aggregates over the coalition, so it shouldn't matter if the data is homogeneous or heterogeneous. Figure 3 compares these two settings, but the results appear to be exactly the same. As such, this figure could be removed from the camera ready version.
On balance, this paper seems appropriate for acceptance at NeurIPS. Data preprocessing is an important task (that is often underappreciated) and the paper proposes an interesting method for doing this in the federated learning setting. | train | [
"Z9NZorGKg15",
"AEERKlWjuC",
"ERUN2zvYqI",
"SU7O2joWkku",
"KNjX92_lCKk",
"ms8zqPx-92h",
"_7YB2GYO2np",
"RJAgRPulBsX",
"aWJtiUnYaYT",
"yu9wtI_fnGd",
"7lSIWWj2H-M",
"Ex7JhV9lkqX",
"MuZjO20dQ9C",
"NTZ0iVfGk4",
"e4ZsMUrZsF",
"XO1-_jvcNhA",
"wvfxxiPZL57",
"GWEx3nQzeWj",
"Gh6_OpSmxnR",
"jN0HJno7gCx",
"MdU9DtLjATH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer W31E,\n\nThe discussion deadline is approaching, and we would like to know whether our detailed answer successfully adresses your remarks and questions. If it is the case, we would appreciate if you could reconsider your evaluation. Otherwise, we would be happy to discuss further with you any remaining concerns.\n\nRespectfully,\n\nThe authors",
" Dear reviewer Kq6t,\n\nWe thank you for considering our answers and for increasing the score of your review.\n",
" Dear AC,\n\nYou are totally right. The binary search of secFedYJ can be replaced by a k-ary search. As long as only the sign of the derivative of the log-likelihood is revealed, the proposition 4.1 holds. Indeed, it is straightforward to extend the pseudo-code of $\\mathcal{F}$ provided in appendix F of the revised version, for k-ary search.\n\nWe would like to thank you for this suggestion and we would be happy to add the possibility of replacing the binary search to k-ary search in the paper before its publication.\n",
" I appreciate the author's consideration of my comments.",
" I would like to thank the authors to answer my questions and address some of my concerns. I would increase my score accordingly and take the response into consideration when discussing with the other reviewers and the AC.",
" Thanks for answering my question.\n\n> However, it would be important to check that no further information is leaked on the data in such a protocol.\n\nIt seems to me that the same argument that is used for exponential/binary search should apply to ternary, quaternary, or k-ary search.",
" Dear Area Chair,\n\nWe would like to thank you for your question and your suggestion. We agree that it would be interesting to see if the clients could simultaneously compute various values of $\\Delta$ in SMC for different values of $\\lambda$ to speed up the algorithm. However, it would be important to check that no further information is leaked on the data in such a protocol. We did not include such a study in the scope of our work as our work did not focus on the accuracy/speed trade-off (apart from Fig 2 left, see comment below), but we would be happy to mention such an idea in the conclusion of our paper, leaving it to further work.\n\nAs we justify in the paper (line 266-270), we think that the overall complexity of our method is compatible with real-world cross-silo FL projects. \n\nBesides, as mentioned line 263 in our work, secFedYJ can be applied in parallel to different features. In other words, only 726 rounds of communication are performed regardless of the dimension of the data.\n\nMoreover, as shown in Fig 2 (left), taking $t_\\mathrm{max} = 40$ is quite conservative and can be reduced to decrease the complexity. Indeed, with only $t_\\mathrm{max} = 15$, the median relative error between the exponential search and the scikit-learn method based on Brent minimization method is $10^{-4}$. \n\nFinally, the number of communication rounds (726) mentioned is due to the underlying SMC routines used. New SMC protocols are regularly designed and published, which might decrease this number of rounds.\n\nWe recall that the exponential search used in secFedYJ has linear convergence while the Brent minimization method traditionally used has a supra-linear convergence. The reason why we choose exponential search instead of Brent minimization method is:\n- It is guarantee to converge to the right value as it leverages the convexity of the negative log-likelihood, while Brent minimization method fails in some cases as shown in Fig 2 (right)\n- It only requires to compute and reveal the sign of the derivative of the log-likelihood, which leak less information. Brent methods requires to compute the exact value of the log-likelihood at each step\n\n",
" Line 261 says \n> Performing SECUREFEDYJ with tmax = 40 takes 726 rounds of communication\n\nIs it possible to reduce the round complexity?\n\nFor example, is it possible to partially parallelize the search? If so, what would be the tradeoff in terms of communication per round?",
" I thank the authors for addressing my questions, and clarifying some aspects of their work. \n\n> The paper does not offer insights that go beyond the immediate question of the YJ transformation.\n\nThis is a factual description of the content of the paper. The results apply to the YJ transformation, but the paper does not suggest any other transformations or functions that this approach could be used for. ",
" Dear authors,\nThank you for your responses and the revision. These will be considered during the discussion.\nRegards,\nYour AC",
" We would like to thank all reviewers for their time and useful remarks. We provided a detailed answer to each reviewer in a separate comment and submitted a revised version of the paper taking into account the reviewers’ remarks. We would be happy to further exchange with the reviewers during the discussion period.",
" ## Remark 3:\nThe authors highlight the heterogeneity challenge in the introduction but I don't see how they customize their algorithm to deal with heterogeneity. If I understand correctly, the application itself is not sensitive to heterogeneity. For example, I cannot tell any difference between Figure 3(a) and 3(b). Also linked with: Please clarify how the algorithm is customized to deal with heterogeneity if any?\n\n## Answer to remark 3:\nWe claim that the outcome of secFedYJ does not depend on how the data is distributed across the clients, and that its result would be the same if all the data were pooled in a central server. Therefore, we do not have to customize secFedYJ when facing an heterogeneous data distribution across clients. This point is illustrated in Fig 3(a) and 3(b), in which the same algorithm is used on data that is either homogeneously or heterogeneously distributed across the center. In particular, note that the result of the algorithm is the same in both cases.\n\n\n## Remark 4\nProposition 4.1 is correct but the proof is too sketchy.\nPlease refine the proof for Proposition 4.1 (for example, simulation-based proof).\n\n## Answer to remark 4\nWe thank the reviewer for this excellent remark. In order to refine this proof, we added in Appendix F an explicit pseudo-code to build such a function F. We numerically checked that this function F recovers the intermediate quantities from $\\lambda_*$ as stated by the proof.\nTo use SMC, the data points need proper discretization. The authors do not mention any details about the discretization process like the hyper-parameters.\n\n## Remark 5\nPlease add details about discretization for SMC.\nHave the authors considered discretization in SMC when performing the empirical evaluation?\n\n## Answer to remark 5\nIndeed, as pointed out by the reviewer, all quantities are discretized when using SMC. All the floating quantities are discretized in our work using fixed-point representation as described in appendix D.2. The discretization is parametrized by $l$ and $f$ which are respectively the total number of bits to encode each quantity, and the number of bits dedicated to the decimal part of float number. We show the impact of this discretization on the performance of our algorithm in figure 3 where the x-axis corresponds to the discretization parameter $f$.\n\n## Remark 6\nPlease validate the numerical stability advantage on more real-world datasets.\n\n## Answer to remark 6\nWe added in the appendix E.4 a table checking whether this stability appears on more datasets. Out of 484 features, we identified 5 instabilities.\n\n## Remark 7\nSome minor points: 7.1 There are many references to Appendix C. I recommend the authors to refer to subsections in Appendix C which will make the flow clearer. 7.2 The secure multiparty computation background is almost taking up a whole page. Consider moving it to the appendix.\n\n## Answer to remark 7\nWe thank the reviewer for this suggestion. There also was a typo in appendix C where two paragraphs were inverted. We fixed it, and we clarified the reference to appendix C.\n",
" We would like to thank anonymous reviewer Kq6t for their review. We answer below all their remarks and questions, and we modify the paper accordingly. We think that thanks to the reviewer remarks, the paper is now clearer, and we would like to politely ask the reviewer to re-evaluate the revised paper. We are open to further discussion with the reviewer regarding the revised paper.\n\n## Remark 1\n\nFedYJ requires multiplication on secret shared values so benchmarking the system performance (how long does it take to perform the computation and communication) is a necessity. (Also linked with: Please include empirical evaluation of computation and communication cost on real clusters.)\n\n## Answer to remark 1\nFigure 3 displays the results of a numerical benchmark of the SMC routines on a single computer, thereby measuring computation time. Only the cost due to network communication was not taken into account. However, in order to provide a reasonable communication benchmark, we provide in appendix D4 both the algorithmic complexity of the full SMC routine, and the number of messages sent on the network when performing secFedYJ. Using the bandwidth values of a real-life cross-silo FL project, we estimated at the end of section 4 line 261 of the revised version (line 264 of the original version), that:\n> In this context, the execution of secFedYJ with $t_\\mathrm{\\max} = 40$ on $p$ features would take about $726\\times 20\\ \\mathrm{ms} \\simeq 15\\ \\mathrm{s}$ due to the communication overhead, in addition to $p\\times8\\ \\mathrm{Mb} / 1\\ \\mathrm{Gbps} \\simeq 8 p\\ \\mathrm{ms}$ due to the bandwidth. This shows that secFedYJ is indeed a viable algorithm in a real-world scenario.\nWe leave a large-scale evaluation of our method on a real cluster to future work. \n\n\n\n## Remark 2\nThe threat model is too weak, which makes the whole setting degenerate to distributed learning instead of FL.\n## Answer to remark 2\nIn this work, we consider, cross-silo FL with a honest-but-curious threat model. Such a threat model is relevant to consider in FL for the following reasons:\n\n- First of all, projects including few organizations that collaborate to train a model together, is indeed in the scope of cross-silo Federated Learning. Indeed, as stated by “Advances and Open Problems in Federated Learning” [1]: Since the term federated learning was initially introduced with an emphasis on mobile and edge device applications interest in applying FL to other applications has greatly increased, including some which might involve only a small number of relatively reliable clients, for example multiple organizations collaborating to train a model. We term these two federated learning settings “cross-device” and “cross-silo” respectively.\n\n- In cross-silo FL, participants are often large institutions or companies dealing with regulated data, which undergo frequent audits and desire to maintain reputation. An example of such a setting is the MELLODDY project [4], where each partner runs a code that has been previously audited. 
The honest-but-curious model is therefore a relevant threat model here.\n- Besides, such a threat model have been often considered and studied in several related cases:\nIn [3], the author stresses the relevance of this model (also called semi-honest): “in many settings, one may assume that although the users may wish to cheat, they actually behave in a semi-honest way” and underlines interest of this semi-honest model from both a theoretical and practical standpoint, given the difficulty to hack code: “In addition to the methodological role of semi-honest parties in our exposition, they do constitute a model of independent interest. In particular, deviation from the specified program, which may be invoked inside a complex software application, is more difficult than merely recording the contents of some communication registers”.\n- In [2], the authors take the example of a Smart grid project where the energy provider has access to the private energy consumption of users. This work states that: “However, a legitimate participant in the protocol such as the energy supplier could not realistically be modeled as a DY [ = malicious] adversary. In reality, various factors limit the capabilities of the energy supplier including regulations, audits, oversight and desire to maintain reputation.” [...] “We therefore propose to model this agent as a semi-honest or honest-but-curious”\n\n\n[1] Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., ... & Zhao, S. (2021). Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2), 1-210.\n\n[2]Andrew Paverd, Andrew Martin, and Ian Brown. Modelling and automatically analysing privacy properties for honest-but-curious adversaries. Tech. Rep, 2014.\n\n[3] Foundations of cryptography, Oded Goldreich, Volume 2, page 619, sec 7.2.2 \n\n[4] MELLODDY project, www.melloddy.eu\n",
" We would like to thank anonymous reviewer 5G9f for their review. We answer below all their remarks and questions.\n\n## Remark 1\n\nThe paper does not offer insights that go beyond the immediate question of the YJ transformation.\n\n## Answer to remark 1\nWe are not sure what the reviewer meant by this remark. The goal of this paper is indeed to propose a secure adaptation of the YJ transformation, which is widely applied in e.g., applications to genomics, to the FL setting.\n\n## Remark 2\nThe search algorithm requires some operations which are potentially costly to perform in the SMC setting.\n\n## Answer to remark 2\nFirst of all, our numerical experiments based on MPyC reproduces all the SMC computations on a single computer. We therefore measure computation costs, and only the cost of communicating over the network is not simulated.\nBesides, we provide in appendix D.4 the communication cost and the computational complexity of the different SMC operations performed in our algorithm using the SMC routines from [36] and implemented in MPyC. Using these numbers, we compute at the end of section 4, in the paragraph starting line 261 in the revised version (line 264 in the first version), the cost of the whole routine on real cross-silo FL project. We conclude therefore in the paper that:\n\n> In a realistic cross-silo FL setting [...] the execution of secFedYJ with $t_\\mathrm{\\max} = 40$ on $p$ features would take about $726\\times 20\\ \\mathrm{ms} \\simeq 15\\ \\mathrm{s}$ due to the communication overhead, in addition to $p\\times8\\ \\mathrm{Mb} / 1\\ \\mathrm{Gbps} \\simeq 8 p\\ \\mathrm{ms}$ due to the bandwidth.\n\n\n## Remark 3\nThe paper glosses over the technical details of applying the algorithm in the SMC setting: there are some moderately complex polynomial operations on data that require multiplications / exponentiations, but the cost of these is not discussed in much detail -- Appendix D gives only a very high level overview of basic secret sharing. Moreover, the paper does not indicate where the challenge lies: once algorithm 1 is presented, it seems that Algorithm 3 is very similar except that data access is performed through secret sharing.\n\n## Answer to remark 3\n- Regarding the cost of the SMC routines, we refer to our answer of the previous remark.\n\n\n- We thank the reviewer for pointing out that the challenges and the details of Algo 3 were not detailed enough. We modified the pseudo-code and added the appendix D.5 to provide more details about the role of each party and how SMC is used in Algo3. In this algorithm, the server only plays the role of the orchestrator. All the quantities in bracket are shared-secret quantities computed across the client using SMC routines. The new version of the article explicitly details the roles of each party.\n\n## Remark 4\nThe communication overhead is high (8MB per client per feature), and there is no discussion of whether this could be reduced or traded off -- this is why the method is motivated in the cross-silo rather than cross-device FL setting.\n\n## Answer to remark 4\nWe consider the cross-silo FL where a good bandwidth exists between actors. secFedYJ is a solution that can provide a pooled-equivalent result with reasonable communication cost. In this setting, trying to reduce the communication costs might incur a drop in performance, which is not necessary given the setup.\nWe agree that in other settings, such as cross-device FL, adjusting the trade-offs would be very relevant. 
However, this is not the scope of the paper, and we leave such a question for future work.\n\n## Remark 5\nAn alternative approach in the cross silo setting would before each client to apply the YJ transformation locally to their data, on the assumption that the distribution is common across clients. Can you show examples that demonstrate where this heuristic will fail?\n\n## Answer to remark 5\nIndeed, under the restrictive assumption that the distribution is common across all clients, applying local YJ transformations will yield the expected result. However, this heuristic would fail if clients have *heterogeneous* data distributions. secFedYJ can be applied in both cases, without any check on the data distribution as its result does not depend on the way the data is distributed across the clients.\n",
" ## Remark 4:\nThe proposed method will destroy the heterogeneity of Federated Learning: After the YJ transformation process, it seems like every client will have the same distribution since they share the same $\\lambda_*, \\mu_*, \\sigma_*$. This poses a risk to FL: if the adversary knows the data distributions of every client (since they have the same distribution), the adversary will quickly deploy the attack of the tails [1]. Moreover, since the threat model is \"honest-but-curious\", using the YJ transformation will pose a privacy risk to the clients' data since the server can deploy the membership inference attacks [2]. (Also linked to the limitation: The proposed method will destroy the heterogeneity of Federated Learning.)\n## Answer to remark 4:\n- What the reviewer means by “destroying the heterogeneity of Federated Learning” is not clear to us, nor what the benefits of having clients with heterogeneous distributions would be. In particular, most Federated Learning strategies aim to mitigate the detrimental effects of heterogeneity on the efficiency of the training process. \n\n- Regarding shared parameters: sharing the same $\\lambda_*, \\mu_*, \\sigma_*$ does not imply that every client will have the same distribution, but only that the same transformation will be applied to each client. In particular, applying a common YJ transformation to different distributions leads to different distributions. \n- Regarding the attack of the tails, first of all, the honest-but-curious setting excludes the possibility of backdoor attacks such as the attack of the tails described in [1].\n- Besides, applying SecFedYJ neither increases or decreases the possibility of performing the attack of the tails mentioned by the reviewer. This attack of the tails consists in implementing a backdoor in FL on data that are unlikely to appear in the training dataset, as they belong to the tail of the distribution. Indeed, we believe that the attack of the tails is not adapted to our case, as we mainly consider tabular data, whereas this attack is principally designed for images. Besides, the central server only know the marginal distribution columns by column, and not the overall distribution of the dataset.\n- Regarding the membership attack and taking into account the points raised above, it is unclear to us how knowing the $\\lambda_*, \\mu_*$ and $\\sigma_*$ parameters would help carrying out this attack, now that we have clarified that clients would not share the same distribution after applying the Yeo-Johnson preprocessing. Could the reviewer please elaborate on this point?\n\t\t\n\n\n## Remark 5:\nIn Algorithm 3, who will run the algorithm, and what is the job of the clients and the server?\n\n## Answer to remark 5:\nWe thank the reviewer for pointing out that the roles of the different parties in Algo 3 were not detailed enough. We clarified the pseudo-code and added a more detailed explanation of the respective roles of the clients and servers in Appendix D.5. In this algorithm, the server only plays the role of the orchestrator. All the quantities in brackets are shared-secret quantities computed across the client using SMC routines. 
The new version of the article explicitly details the role of each party.\n\n## Remark 6\nWhat is the reason you only consider cross-silo FL since, in my opinion, this method can be generalized to all kinds of FL.\n\n\n## Answer to remark 6\nWe thank the reviewer for this suggestion.\nSuccessfully running a protocol in SMC requires few participants and a stable connection among them. Both of these conditions are met in cross-silo FL. Further, the honest-but-curious model is more adapted to cross-silo, as justified above. \nIn any case, we agree that it would be an interesting direction to investigate whether this method can be adapted to cross-device, especially in the case of unstable connections across clients. We leave this question to future works.\n\n\n## Remark 7\nWhat is the advantage of the proposed method for FL?\n\n## Answer to remark 7\n This method ensures that a *common* preprocessing step is performed in each center, which is crucial in e.g. genomics applications in a federated setting. In contrast, performing independent YJ steps in each center would introduce undue heterogeneity, which would be detrimental to downstream tasks as shown by our numerical experiments Figures 5 and 6.\n",
" We would like to thank anonymous reviewer W31E for their time and their thorough review. We answer below all their remarks and their questions, and we modified the paper accordingly. We think that thanks to the reviewer's remarks, the paper is now clearer, and we would like to politely ask the reviewer to re-evaluate the revised paper. We are open to further discussion with the reviewer regarding the revised paper.\n\n## Remark 1:\nUnclear threat model: can not understand the purposes or capabilities of the threat model.\n## Answer to remark 1:\n- In the following we explain the threat model and justify its relevance in our case. We added a sentence in the corresponding paragraph of the article to highlight the relevance of the honest-but-curious model in cross-silo FL.\nWe consider an honest-but-curious threat model, as described in [1] (which we cite in our paper). In this setting, all participants follow the protocol without any modification, but are free to try to infer information from the intermediate quantities exchanged throughout the protocol.\n- Such a threat model has been often considered and studied in the literature. In [2], the author stresses the relevance of this model (also called semi-honest): “in many settings, one may assume that although the users may wish to cheat, they actually behave in a semi-honest way” and underlines interest of this semi-honest model from both a theoretical and practical standpoint, given the difficulty to hack code: “In addition to the methodological role of semi-honest parties in our exposition, they do constitute a model of independent interest. In particular, deviation from the specified program, which may be invoked inside a complex software application, is more difficult than merely recording the contents of some communication registers”.\n- In [1], the authors take the example of a Smart grid project where the energy provider has access to the private energy consumption of users. This work states that: “However, a legitimate participant in the protocol such as the energy supplier could not realistically be modeled as a DY [malicious] adversary. In reality, various factors limit the capabilities of the energy supplier including regulations, audits, oversight and desire to maintain reputation.” [...] “We therefore propose to model this agent as a semi-honest] or honest-but-curious (HBC)”\nInsofar as cross-silo FL is often applied in large institutions or companies, in regulated sectors such as banking, insurance or healthcare, the capabilities by the actors are bound in the same fashion as in the smart grid example above. For instance, in the MELLODDY project [3], which is an example of cross-silo FL of interest for this paper in healthcare, each partner is a large company and runs a code that has been previously audited.\n- Therefore, an honest-but-curious behaviour is a realistic model for this setting.\n\n[1]Andrew Paverd, Andrew Martin, and Ian Brown. Modelling and automatically analysing privacy properties for honest-but-curious adversaries. Tech. Rep, 2014.\n[2] Foundations of cryptography, Oded Goldreich, Volume 2,, page 619, sec 7.2.2 \n[3] MELLODDY project, www.melloddy.eu\n\n## Remark 2:\nCan you please explain the log-likelihood in Eq. 1? I followed reference 50 in the text, but there's no information in the reference.\n\n## Answer to remark 2:\nWe thank the reviewer for this excellent and insightful remark. Indeed, neither reference 50 nor the original Box-Cox paper does provide the derivation of the log-likelihood. 
In particular, the term $(\lambda-1)\mathrm{sign}(x_i) \log (|x_i|+1)$ can be surprising. This term comes from the determinant of the Jacobian of the transformation $\Psi(\lambda, x_i)$. We added Appendix A.1, which provides the full derivation of the log-likelihood.\n\n## Remark 3:\nThere's no theoretical guarantee of the clients' data privacy leakage from the output of the proposed method: since \"the fitted parameter \lambda_* contains all the information revealed during the intermediate steps,\" whether it can leak any information and whether we should protect it?\n\n## Answer to remark 3:\nThere's no theoretical guarantee of the clients' data privacy leakage from the output of the proposed method.\nThis is a limitation of this work: we do not study the privacy leak induced by the final lambda. Our claim is simply that no further information is leaked, as is standard in secure multiparty computation approaches. We added a specific sentence in the limitation paragraph to clarify this point, and leave to future work the study of the amount of information leaked when sharing this parameter.",
" We would like to thank the anonymous reviewer wXmX for their time and positive review. We answer their two main remarks: \n\n- Remark 1: The definition of SMC in Section 2 is more limited than the generally used one. The authors only consider computation on local shares but not computation involving communication (line 130). General multi-party computation crucially involves a protocol without which only very limited functionality could be achieved.\n- Answer: We agree that the example originally provided in this section only considers the case where the SMC routine only involved local operations, while most non-additive operations require rounds of communication across clients. Indeed, the goal of this section is to introduce the idea of SMC to readers not familiar at all with SMC. We modified the two sentences starting line 130 to provide a more accurate presentation of SMC.\n\n- Remark 2: The evaluation of the secure computation (Appendix D.3) does not correspond to the state of the art. This might suffice given the low complexity, but I would like to note that there are more efficient ways for secure comparisons (Escudero et al., Crypto 2020).\n- Answer: We thank the reviewer for this suggestion, of which we were not aware. We added this reference in Appendix D.3.",
" The paper proposes a preprocessing protocol based on multi-party computation that works independently of how the data is skewed among a few parties.\n The algorithm makes clever use of SMPC to minimize cost while preserving security. While it reminds me of secure median computation (Aggarwal et al., J. Cryptol. 2010), it is clearly worthy of independent publication. The interesting aspect is the following: The desired algorithm is iterative. This iteration is replicated in the secure protocol as follows: In every step, there is a relatively simple computation, the result of which (one bit) is revealed. This bit is then used for the next step in the manner of bisectional search. The revealing operation raises the question of the security of doing so. The security is given by the fact that the final result (which is revealed anyway) implies all the intermediate results, and thus, the adversary does not learn any extra information as long as all parties follow the protocol.\n\nThe definition of SMC in Section 2 is more limited than the generally used one. The authors only consider computation on local shares but not computation involving communication (line 130). General multi-party computation crucially involves a protocol without which only very limited functionality could be achieved.\n\nThe evaluation of the secure computation (Appendix D.3) does not correspond to the state of the art. This might suffice given the low complexity, but I would like to note that there are more efficient ways for secure comparisons (Escudero et al., Crypto 2020).\n n/a\n yes",
" This paper proposes an exponential search method for YJ transformation in centralized and federated learning settings. It proved that the negative log-likelihood of YJ transformation is strictly convex with respect to \\lambda. Building upon that, they provide a method to do an exponential search to find the best \\lambda, \\mu, and \\sigma for YJ transformation. The paper also applies the Secured Multiparty Computation to provide a privacy-preserving guarantee in the process of Federated YJ transformation. Strengths:\n- Provide a novel exponential search method for YJ transformation.\n- The proposed method achieves high accuracy compared to Brent minimization, which is widely used.\n- The proposed method has better numerical stability than existing methods (Brent minimization).\n\nWeakness:\n- Unclear threat model: can not understand the purposes or capabilities of the threat model.\n- The proposed method will destroy the heterogeneity of Federated Learning: After the YJ transformation process, it seems like every client will have the same distribution since they share the same \\lambda_*, \\mu_*, \\sigma_*. This poses a risk to FL: if the adversary knows the data distributions of every client (since they have the same distribution), the adversary will quickly deploy the attack of the tails [1]. Moreover, since the threat model is \"honest-but-curious\", using the YJ transformation will pose a privacy risk to the clients' data since the server can deploy the membership inference attacks [2].\n- There's no theoretical guarantee of the clients' data privacy leakage from the output of the proposed method: since \"the fitted parameter \\lambda_* contains all the information revealed during the intermediate steps,\" whether it can leak any information and whether we should protect it? \n\n[1] Wang, Hongyi, et al. \"Attack of the tails: Yes, you really can backdoor federated learning.\" Advances in Neural Information Processing Systems 33 (2020): 16070-16084.\n\n[2] Shokri, Reza, et al. \"Membership inference attacks against machine learning models.\" 2017 IEEE symposium on security and privacy (SP). IEEE, 2017. - In Algorithm 3, who will run the algorithm, and what is the job of the clients and the server?\n- What is the reason you only consider cross-silo FL since, in my opinion, this method can be generalized to all kinds of FL.\n- Can you please explain the log-likelihood in Eq. 1? I followed reference 50 in the text, but there's no information in the reference. \n- What is the advantage of the proposed method for FL? - Unclear threat model\n- The proposed method will destroy the heterogeneity of Federated Learning.\n- There's no theoretical guarantee of the clients' data privacy leakage from the output of the proposed method.",
" The authors propose ExpYJ, a new algorithm for feature Gaussianization which beats Brent algorithm in terms of numerical stability but asymptotically slower than Brent algorithm. The authors them combine ExpYJ with secure aggregation to obtain FedYJ. The authors propose a federated feature Gaussinaization algorithm by first proposing a new optimization algorithm to optimize the hyper-parameters for YJ transformation and then combining it with secure aggregation in FL.\n\nStrength:\n1. The paper is well written and easy to follow.\n2. The authors evaluate their algorithm on several downstream applications.\n3. The improvement on numerical stability is a good contribution. However, I think evaluation on only one dataset is not enough. I encourage the authors to validate this advantage on more datasets.\n\nWeakness:\n1. The technical novelty is limited. The proof of strict convexity is a contribution, but with the convexity of Box-Cox transformation, the proof is not too challenging.\n2. FedYJ requires multiplication on secret shared values so benchmarking the system performance (how long does it take to perform the computation and communication) is a necessity.\n3. The authors highlight the heterogeneity challenge in the introduction but I don't see how they customize their algorithm to deal with heterogeneity. If I understand correctly, the application itself is not sensitive to heterogeneity. For example, I cannot tell any difference between Figure 3(a) and 3(b).\n4. The threat model is too weak, which makes the whole setting degenerate to distributed learning instead of FL.\n5. Proposition 4.1 is correct but the proof is too sketchy.\n6. To use SMC, the data points need proper discretization. The authors do not mention any details about the discretization process like the hyper-parameters.\n7. Some minor points: 7.1 There are many references to Appendix C. I recommend the authors to refer to subsections in Appendix C which will make the flow clearer. 7.2 The secure multiparty computation background is almost taking up a whole page. Consider moving it to the appendix. 1. Please clarify how the algorithm is customized to deal with heterogeneity if any?\n2. Have the authors considered discretization in SMC when performing the empirical evaluation?\n\n 1. Please include empirical evaluation of computation and communication cost on real clusters.\n2. Please add details about discretization for SMC.\n3. Please validate the numerical stability advantage on more real-world datasets.\n4. Please refine the proof for Proposition 4.1 (for example, simulation-based proof).\n5. Please clarify how the algorithm is customized to deal with heterogeneity if any?",
" The paper addresses the Yeo-Johnson feature transformation, and how it can be computed in the federated setting, where privacy and security restrict how information can be shared. The paper demonstrates the convexity of the log-likelihood function, which enables an efficient search procedure in the multiparty computation model that does not leak any information beyond the revealed answer. Experiments are included that show the behavior inpractice. Strengths \n\n- There is growing interest in secure machine learning that goes beyond the core training part of the ML pipeline, and provides security for data preparation in addition\n\n- The paper shows a nice result about the convexity of the Yeo-Johnson transformation which enables its secure computation\n\n- The proof of minimum leakage (proposition 4.1) is a very nice contribution, and clearly makes the argument that there is no overall leakage from the algorithm\n\nWeaknesses\n\n- The paper does not offer insights that go beyond the immediate question of the YJ transformation. \n\n- The search algorithm requires some operations which are potentially costly to perform in the SMC setting\n\n- The paper glosses over the technical details of applying the algorithm in the SMC setting: there are some moderately complex polynomial operations on data that require multiplications / exponentiations, but the cost of these is not discussed in much detail -- Appendix D gives only a very high level overview of basic secret sharing. Moreover, the paper does not indicate where the challenge lies: once algorithm 1 is presented, it seems that Algorithm 3 is very similar except that data access is performed through secret sharing. \n\n- The communication overhead is high (8MB per client per feature), and there is no discussion of whether this could be reduced or traded off -- this is why the method is motivated in the cross-silo rather than cross-device FL setting An alternative approach in the cross silo setting would before each client to apply the YJ transformation locally to their data, on the assumption that the distribution is common across clients. Can you show examples that demonstrate where this heuristic will fail? \n\n The conclusions section has a nice discussion of limitations and future work that is appropriate for the paper. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
4
] | [
"Gh6_OpSmxnR",
"KNjX92_lCKk",
"ms8zqPx-92h",
"wvfxxiPZL57",
"Ex7JhV9lkqX",
"_7YB2GYO2np",
"RJAgRPulBsX",
"nips_2022_25XIE30VHZE",
"NTZ0iVfGk4",
"7lSIWWj2H-M",
"nips_2022_25XIE30VHZE",
"MuZjO20dQ9C",
"jN0HJno7gCx",
"MdU9DtLjATH",
"XO1-_jvcNhA",
"Gh6_OpSmxnR",
"GWEx3nQzeWj",
"nips_2022_25XIE30VHZE",
"nips_2022_25XIE30VHZE",
"nips_2022_25XIE30VHZE",
"nips_2022_25XIE30VHZE"
] |
nips_2022_Fytzfxj3Bq7 | Fixed-Distance Hamiltonian Monte Carlo | We propose a variation of the Hamiltonian Monte Carlo sampling (HMC) where the equations of motion are simulated for a fixed traversed distance rather than the conventional fixed simulation time. This new mechanism tends to generate proposals that have higher target probability values. The momentum distribution that is naturally joint with our Fixed-Distance HMC (FDHMC), and keeps the proposal acceptance probability close to 1, is not Gaussian and generates momentums that have a higher expected magnitude. This translates into a reduced correlation between the successive MCMC states and according to our experimental results, leads to an improvement in terms of the effective sample size per gradient when compared to the baseline HMC and No-U-Turn (NUTS) samplers.
 | Accept | This paper proposes a new variant of Hamiltonian Monte Carlo. Rather than using a fixed number of iterations (as in the original HMC) or choosing the step-size adaptively (as in NUTS), the paper simulates the dynamics until a fixed *distance* has been traversed. The paper gives some arguments for why this might be a good idea and a careful proof of detailed balance for the newly proposed algorithm. Reviewers agreed the algorithm seemed correct and the numerical results were compelling, but there were some remaining concerns about the implementation of the baseline algorithms and some questions about the technical details of the algorithm. Given the importance of HMC and the novelty and agreed correctness of this work, I recommend acceptance and urge the authors to consider the clarifying questions asked by the reviewers as opportunities for improving the paper. Additional experimental evidence (e.g. comparing to another implementation of NUTS/HMC) would also be helpful. | train | [
"H_84q325puW",
"O0Wzu8N-I8u",
"wBjJi9rPVKj",
"YCmkDBA-Slf",
"oJZnkobyfuk",
"YSvVnPfeSTE",
"5C995JeVCh",
"vvdxHanOFhO",
"GJw-o8ItF6",
"agMZgsxcW86"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Once again, thanks for your suggestions and time spent on reviewing our paper.\nWhile we can still communicate, in case you have a particular numerical experiment in mind or could refer us to a related theoretical study that would potentially add to the value of the paper, we would appreciate if you could share it with us. ",
" I appreciate the authors' responses related to the questions. While I can see the materials provided have offered more evidence on the performance of FDHMC. these numerical evidences are not as convincing as a thorough and theoretical study of the algorithm. Therefore, my rating remain unchanged. ",
" We thank the reviewer for suggesting using an existing PPL.\nThe poor performance of our implementation of NUTS relative to the static HMC had been concerning for us too. But after spending a substantial time on debugging the code and finding no issue, we were confident that the results are correct (despite being unexpected). Mici's results have surprised us again. There is a possibility that the difference is only due to a more effective parameter tuning. However, as the reviewer suspected, it can also be due to an undiscovered bug in our implementation or tuning of NUTS. In any case, now we only report on Mici's implementations. Also, we updated the attached code with MICI's samplers. ",
" I thank the authors for the response, and for the added experiments and the updated manuscript.\n\nThe updated results with MICI make more sense to me (although I have to say that I find it quite concerning that the updated results differ so much from the previous results: many cases have 2x or even 5x improvements with the NUTS implementation from MICI). Please also update the rest of the manuscript to be consistent with the change (e.g. updated supplementary L111, attach updated code with MICI).\n\nI have updated my score accordingly.",
" We would really like to thank the reviewers for their comments. Addressing the raised issues has surely increased the quality of the paper. \n\n* * * * *\nReviewer d8Ux\n\nCOMMENT 1. On concerns about the implementation of NUTS:\n\nResponse: We thank the reviewer for highlighting this issue. As the reviewer suggested, now we use Mici (https://github.com/matt-graham/mici) implementation of NUTS. Mici is a PPL that can easily be combined with Python code. It provides two implementations of NUTS: (1) Dynamic Slice HMC (which is the original NUTS algorithm) and (2) Dynamic Multinomial HMC (which is a variation that is used in STAN since 2017 and is reported to outperform the original NUTS). We configure these samplers with Mici’s automated tuning. In average, these implementations perform better than our implementation of NUTS and their performance is more consistent with static HMC. \nAs such, now in the main text, we compare FDHMC versus Mici’s implementations of NUTS. This does not affect the comparative results and FDHMC still outperforms the baseline. (see the experimental results section in the updated main text).\n\n\nCOMMENT 2: Why is the total simulation duration in HMC chosen to be 2? What is 2 here, and why is this number independent of the model?\nResponse. \nWe are not aware of any systematic way to tune the simulation length (lambda=epsilon*L) of HMC (apart from choosing L dynamically as in NUTS). As such we chose this value by try and error and partly based on the quantitative analysis provided in the original NUTS paper. \nNote that we use this value as an upper limit. Because if the dual averaging (by which epsilon is tuned) does not converge, we keep halving lambda until the algorithm converges. This happens in some of our models. Therefore, the chosen simulation duration is not entirely independent of the model.\n\n\n\nCOMMENT 3: For FDHMC, there is some discussion on how to pick the fixed distance, but what about the step size? Is it picked the same way as HMC? What would happen if we apply some of the automated parameter tuning utilities available in commonly used PPL packages?\n\nResponse. Yes, the leapfrog step size of all algorithms that we have reported on (i.e. HMC, NUTS and FDHMC), is tuned by dual averaging. Dual averaging is by far the most popular way to tune the step-size of all HMC-based algorithms. Note that different samplers require slightly different variations of dual averaging (e.g. the objective function that NUTS’ dual averaging maximises is different from that of the static HMC). To make a dual averaging algorithm that is suitable for FDHMC we modified the HMC’s dual averaging (compare Algorithms 2 and 3 in the presented supplementary material with Algorithms 4 and 5 in the original NUTS paper). FDHMC’s dual averaging is a component of the fully automated FDHMC tuning which is already utilised in all experiments of the main text. \nWe could apply an existing automated parameter tuning to FDHMC, but we needed to have access to the source code and modify/customise it to work with the fixed-distance dynamics. \n\nCOMMENT 4. In the supplementary materials the authors present an algorithm to automatically tune the hyperparameters involved in FDHMC. Is there any reason why this algorithm is not used for the experiments in the main paper?\n\nResponse. This should have been a misunderstanding. What is presented in the supplementary material is the details of the hyper-parameter tuning algorithm that we have used in the experiments of the main paper. 
This includes the heuristic to choose the fixed-distance as well as a dual-averaging algorithm for tuning FDHMC’s step-size. \n\nCOMMENT 5. Study how sensitive the performance of FDHMC is to the choice of hyperparameters\n\nResponse:\nTo address this issue, we have added a new Quantitative Analysis section to the supplementary material. In this section we study the sensitivity of FDHMC to the fixed-distance parameter and show that the proposed automated parameter tuning performs reasonably well and chooses parameter values that are not far from the optimal range.\n\nCOMMENT 6. Demonstrate how the automated parameter tuning presented in the supplementary can further improve performance.\n\nResponse: \n(1) As mentioned, the experimental results of the main text already rely on this automated parameter tuning. Now we have modified the first paragraph of the experiments section (in the main text) to convey this more clearly and prevent confusions.\n(2) We have studied the performance of FDHMC’s automated tuning in the newly added quantitative analysis section in the (updated) supplementary material. \n\n \n\n\n\n",
" We would really like to thank the reviewers for their comments. Addressing the raised issues has surely increased the quality of the paper. \n\n\n* * * * *\nReviewer zous\n\nCOMMENT 1. How would one incorporate learning of the mass matrix into FDHMC? If the mass matrix is not unit does this pose any complications in terms of calculations of the remaining distance?\n\nResponse: We thank the reviewer for highlighting this point. For simplicity and clarity of the discussion, in this work we focused on combining the baseline HMC with the fixed-distance mechanism. However, this approach is vastly generalisable and combining Fixed-distance Leapfrog Mechanism with any variation of HMC that relies on Stormer–Verlet leapfrog integrator is straight forward (this is a very important point, as such we have now mentioned it in the last sentence of the revised conclusion). This includes the incorporation of mass matrices as well: If the position vector, $q$, is updated as: $q = q + u$ (where $u$ is an update vector e.g. $u= \\epsilon \\cdot M^{-1} \\cdot p$, with $M$ being a mass matrix), then the remaining distance, $D$, is simply updated as: $D = D – ||u||$. The physical interpretation is as follows: the traversed distance is the evolution-time multiplied by the magnitude of the velocity vector. In our paper, velocity and momentum were a same vector (since momentum = mass $\\times$ velocity) but if mass, in not unit, the velocity vector is the momentum vector divided by mass. That is, $M^{-1} \\cdot p$ if mass, $M$, is a matrix. \n\n \nCOMMENT 2. Doesn't NUTS address Limitation 2?\n\nResponse: No, it does not, because similar to the static HMC (and any existing variation of HMC that we are aware of), the distribution of NUTS’ momentum is Gaussian. Also, note that Limitations 1 & 2 can only be addressed together (since the opposite biases that they induce, nullify each other). \n\nCOMMENT 3. I would suggest to the authors that they pick some model with a complex multimodal posterior where each mode has roughly equal probability…\n\nResponse: We thank the reviewer for this suggestion. We have added a new section to the supplementary material where FDHMC and other samplers are run on a mixture of 4 Gaussian distributions with different dimensions. The sample plots shows that FDHMC’s transitions between the distribution modes occurs much more frequently. Given that the results are quite interesting and reveal the power of Fixed-distance mechanism, we plan to add a Mixture-of-multivariate-normal-densities model to the experiment of the main text, in the camera-ready submission. \n\nCOMMENT 4. Minor suggestions:\n\nResponse: Thanks for highlighting the confusing notation and the typo. They are fixed now. Unfortunately, the time limitation did not allow us to implement other suggested experiments, but we will consider reporting on hierarchical models with hold-out test data in the camera-ready version. \n\n",
" We would really like to thank the reviewers for their comments. Addressing the raised issues has surely increased the quality of the paper. \n\n* * * * *\nReviewer U811\n\nCOMMENT 1. Quantitative analysis of the algorithm is lacking. How sensitive is the performance with respect to the distance parameter?\n\nResponse: Thanks a lot for mentioning this! To address this issue, now we have added a new Quantitative Analysis section to the supplementary material. In this section, we have configured FDHMC with a range of fixed-distance parameters and plotted the ESS/grad versus the fixed-distance for all the experiments of the main text. (The python script that generates these results is added to the code in the supplementary material). \n\n\nCOMMENT 2. How does the [automatically tuned] fixed-distance change for each example? \n\nResponse: To address this question we have added another table in the mentioned new Quantitative Analysis section. In this table we report the average automatically tuned fixed-distance for each experiment in the main text. This value varies a lot for different models (from 0.8 to 6.5). Comparing this table with the sensitivity plots indicates that the proposed tuning mechanism is reasonably effective, and the tuned fixed-distance is not far from the peak of ESS/grad-vs-fixed-distance curve.\n(The python script that generates the entries of this table is added to the code in the supplementary material).\n\nCOMMENT 3. Argument for the statement of ‘the expected (rather than exact) step of the initial and final positions…’ at . (L151-L152).\n\nResponse: We thank the reviewer for highlighting this sentence. The precise statement would be to say: In FDHMC, the expected duration of the first position update is epsilon/2, because it is drawn from Unif(0, epsilon), and the duration of the last FDHMC position update is between 0 and epsilon (as otherwise, another full-step evolution would be added).\nComputing the expected value of the last position update is non-trivial and is not required for our discussion. Now we have edited/re-written the problematic paragraph in the revised version.\n",
" The authors proposed a new way of implementing the Hamiltonian Monte Carlo method. This new approach restricts the distance a single HMC step will travel, hence to avoid unnecessary oscillations in low probability region, as well as to reduce the correlations between samples. The proposed algorithm has been tested against several well-known examples for HMC, and compared with existing method. IT can be seen that the proposed algorithm outperform the pain HMC, as well as NUTS(no-U-turn-sampling). Strengths: The idea is innovative and effective in dealing with the limitations the author discussed. The algorithm is carefully designed to ensure reversibility while achieving fixed-distance traveling. The numerical experiments are representative and convincing.\n\nWeaknesses: Quantitative analysis of the algorithm is lacking. For example, it is curious to know how sensitive the performance of the FDHMC algorithm is with respect to the parameter of the fixed-distance, 1. Please provide the argument for the statement of \"the expected (rather than exact) step of the initial and final positions is at $i+\\epsilon/2$. (L151-L152).\n\n2. How does the fixed-distance change for each example, since it is determined by the procedure discussed in the first paragraph of Sec. 5. Also how sensitive is the performance with respect to the distance parameter?\n\n yes",
" The paper proposes a new Hamiltonian Monte Carlo algorithm for probabilistic inference in which each step of the proposal simulates the Hamiltonian dynamics for a fixed distance traversed in state space rather than a fixed number of steps. The paper motivates this alternate algorithm and provides the details of the algorithm and a proof of correctness. Empirical results comparing the new FDHMC algorithm with classical HMC and NUTS are also provided showing improved effective sample size per gradient estimation.\n\nThe motivation for the new FDHMC algorithm (paraphrasing the authors) is that the existing HMC algorithm traverses through high probability regions with very high momentum and thus doesn't spend enough time in these regions and when it does stop in the high probability regions it tends to get stuck there producing highly correlated samples.\n The proposed FDHMC algorithm certainly appears novel. [This reviewer has read and reviewed a large number of variants of HMC but not quite this one!]\n\nMost current work on improving HMC algorithms seem to be focused on improving the curvature of the state space which appears to be a critical challenge in probabilistic inference using any kind of gradient-based methods. This paper doesn't address the curvature issue. The proposed FDHMC algorithm assumes a unit mass for the momentum. All of the examples in the evaluation seem to be selected such that unit mass matrix would work well for them. Hence the the paper scores low on relevance.\n\nIn terms of quality and clarity the paper is very well written and the mathematical proofs and algorithms are very easy to follow. \n\nThe two main motivations of the paper Limitation 1 and 2 are presented somewhat informally. I am able to follow the main arguments that the authors are making, but still I would have preferred to see a more mathematical justification. At the very least the empirical evaluation should somehow directly tie in the higher ESS numbers to these limitations somehow. Otherwise it could well be that the higher ESS numbers are explained by something else entirely. For example, the tuning methodology of distance traversed could have a huge influence. (As we can see in the paper that the tuning of HMC seems to produce even better results than NUTS!)\n\nAlso, while the experimental results demonstrate higher ESS numbers they don't show that the samples are exploring the high probability region of the posterior space. It might be worth showing the unnormalized posterior probabilities of the samples. In the case of logistic regression it is also customary to show the log likelihood of held out test data using the posterior samples of $\\alpha$ and $\\beta$.\n How would one incorporate learning of the mass matrix into FDHMC? If the mass matrix is not unit does this pose any complications in terms of calculations of the remaining distance?\n\nDoesn't NUTS address Limitation 2?\n\nI would suggest to the authors that they pick some model with a complex multimodal posterior where each mode has roughly equal probability mass. If they can show that FDHMC explores all the modes better than HMC/NUTS then one could argue that it is addressing the limitations that are claimed to be addressed.\n\nAnother minor suggestion would be to include some results on a hierarchical model. 
These are customary to show in papers on probabilistic inference :) \n\nMinor Points:\n\n- I would suggest using a letter other than `p` for the superscript of `q` on line 87, for example, and in related equations. `p` is used for momentum elsewhere and this is a bit confusing.\n\n- In equation 2, the Jacobian should refer to $\cal{F}$ and I feel that the variables in the partial derivative should be q, r rather than q and p.\n Not aware of any negative societal impact of this work.",
" The paper proposes fixed distance Hamiltonian Monte Carlo (FDHMC), a variation of HMC that simulates the equation of motion for a fixed distance rather than a fixed amount of time. The paper establishes the theoretical foundations of FDHMC, and demonstrates its advantages over HMC and NUTS with numerical experiments. ## Strengths\n\n- The idea is novel, and the theoretical foundations are clearly presented with excellent organization of the materials.\n- The proposed FDHMC algorithm makes intuitive sense, and can be easily implemented efficiently. This means FDHMC has the potential to make a big impact due to the widespread applications of HMC.\n\n## Weaknesses\n\n While the theoretical foundations are excellently presented, I have some concerns regarding the numerical evaluation. While the motivation for FDHMC makes sense intuitively, I would like to see more evidence from numerical experiments to confirm the advantages of FDHMC over HMC/NUTS:\n\n- In the experimental results in Section 5, vanilla HMC outperforms NUTS in many cases, sometimes by a very large margin. This seems contradictory to many of the existing results. The authors seem to indicate this is due to a simple tuning heuristic for HMC (L242-243) but I find this claim to be strange and surprising. Given the experiments are done using customized implementations and NUTS is quite a complicated algorithm, the results do not seem convincing to me. I would like to ask the authors to reimplement some of the experiments using readily available and more thoroughly tested probabilistic programming language (PPL) packages (e.g. NumPyro which has a NUTS implementation where all parameter tuning are automated and readily available tutorials on models like the Neal's funnel) and re-verify that HMC still outperforms NUTS.\n- The paragraph on parameter tuning is quite confusing. Why is the total simulation duration in HMC chosen to be 2 (L239-240)? What is 2 here, and why is this number independent of the model? For FDHMC, there is some discussion on how to pick the fixed distance, but what about the step size? Is it picked the same way as HMC? What would happen if we apply some of the automated parameter tuning utilities available in commonly used PPL packages?\n- In the supplementary materials the authors present an algorithm to automatically tune the hyperparameters involved in FDHMC. Is there any reason why this algorithm is not used for the experiments in the main paper?\n\nI would be happy to bump up my score if the above concerns regarding evaluation can be properly addressed, but in its current form I still have concerns over accepting the paper.\n See above The main limitation I see is the potential challenge in tuning the hyperparameters involved in FDHMC. The authors have made some discussions on how to address this limitation. But the discussions are not entirely convincing since the baseline results involving HMC and NUTS do not seem convincing. It would be ideal if the authors can:\n\n- Establish baseline results involving HMC and NUTS using more thoroughly tested PPL packages, using the provided parameter tuning scheme (e.g. for NUTS which is fully automated in most cases).\n- Study how sensitive the performance of FDHMC is to the choice of hyperparameters (e.g. if with some simple heuristics FDHMC can already outperform HMC and NUTS then the FDHMC performance is probably robust).\n- Demonstrate how the automated parameter tuning presented in the supplementary can further improve performance.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"O0Wzu8N-I8u",
"vvdxHanOFhO",
"YCmkDBA-Slf",
"oJZnkobyfuk",
"agMZgsxcW86",
"GJw-o8ItF6",
"vvdxHanOFhO",
"nips_2022_Fytzfxj3Bq7",
"nips_2022_Fytzfxj3Bq7",
"nips_2022_Fytzfxj3Bq7"
] |
nips_2022_BbUxkmrstyk | An Investigation into Whitening Loss for Self-supervised Learning | A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive pairs under the conditioning that the embeddings from different views are whitened. In this paper, we propose a framework with an informative indicator to analyze whitening loss, which provides a clue to demystify several interesting phenomena as well as a pivoting point connecting to other SSL methods. We reveal that batch whitening (BW) based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. Based on our analysis, we propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based methods in preventing collapse and avoids their disadvantages requiring large batch size. Experimental results on ImageNet classification and COCO object detection reveal that the proposed CW-RGP possesses a promising potential for learning good representations. The code is available at https://github.com/winci-ai/CW-RGP. | Accept | This paper studies the impact of whitening losses used in recent Self-supervised learning (SSL) methods. It shows that the symmetric whitening loss can be decomposed into two asymmetric losses, explaining important behaviour experimentally observed (*e.g.* why some whitening transformations -*e.g.* PCA- are not always effective, and why whitened output is not always a good representation). The authors proposed a channel whitening with random group partition (CW-RGP) with good experimental performances.
The paper initially received mixed reviews: 2 borderline reject and 2 accept recommendations. The reviewers' main concerns related to the soundness of the proposed analysis of the whitening loss, the fairness of the experiments, and the positioning and comparison to other recent baselines (*e.g.* [47], DINO or SwAV). The rebuttal did a good job of answering the reviewers' comments, and there was a consensus that the paper should be accepted after the rebuttal.
The AC's own reading of the submission confirmed that the paper is a solid contribution for NeurIPS. The AC considers that the proposed analysis of the whitening loss is valuable for the community, and that the proposed CW-RGP is meaningful and experimentally validated. Thus, the AC recommends acceptance.
The reviewers' consensus for acceptance was reached given the authors' feedback, which included promises to improve the clarity and insights of the analysis, the positioning with respect to recent baselines, and new experimental results. Therefore, the authors are highly encouraged to carefully incorporate these improvements into the final paper.
| train | [
"ZOd4wpYBh_0",
"Zk7WxqeJ4c",
"wgP6FqqPi6H",
"F_1GL0xSTNT",
"hQRmBL1L1Q_",
"lu3oGRgA9Fl",
"x_GHY4_Czu-",
"TxJEg1BTej",
"VVaP_YTVg4j",
"9qN8891QCin",
"rSugjUExorR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the rebuttal, actually I don't have a lot of hesitation to accept this paper with well writing and sufficient experiments. If only 4 GPUs is the truth, I hope in the future with released code, there can be future job to make it finished. I still hold my rating as 7. Accept.",
" Thank you for the rebuttal. It resolves some of my concerns (question 1,2, concern 3). I’d like to raise the score to 5. To me, oscillating signal across mini-batches seems to be a simpler/better explanation for the failures of some of the whitening losses (confirmed by the PCA experiments presented in the paper). The comparisons in Table 2 are also not fully convincing as the settings of the models are not the same: with a batch-size of 4096, BYOL and SWAV were trained with 8x fewer iterations than the proposed method. They can be further explored in future research.",
" Thanks for the detailed rebuttal. It adequately answers my questions. I think it will be good to include the relevant discussions in the next draft. \nI do not have any new concerns with this paper. ",
" We thank all the reviewers (j1ii, qKzP, h596) for their detailed and positive feedback: writing is clear and nicely done (j1ii, qKzP, h596), the proposed random group partition is technically sound (j1ii), understanding for whitening loss is convincing (qKzP), experiments are quite sufficient (qKzP), the paper will be of significance to the wider ML/SSL community (h596), obtains state-of-the-art results on standard benchmark datasets (h596).\n\nOur codes and trained models will be publicly released.",
" \n**Concern 3: Comparison with [47] (ICLR 2022) and additional ablation on the random group partition.**\n\n**Response:** We would like to highlight that the proposed instance-wise contrastive learning (ICL) in [47] is not the same as our mentioned CW only method (without RGP). There are several significant differences in technical details which we note by delving into the details of implementation (from the official codebase of [47]) and the rebuttal/discussion regarding [47] on OpenReview. \n\n1. ICL in [47] uses “stop-gradient” for the whitening matrix (whitening matrix is viewed as a set of parameters rather than a function over mini-batch data), which can be viewed as a naïve linear transformation, while our whitening matrix is viewed as a function that requires back-propagation through the whitening transformation. Similarly, in this way, Feature-wise Contrastive learning (FCL) in [47] is not BW used in W-MSE/Shuffled-DBN. \n\n2. ICL in [47] uses extra pre-conditioning $\\alpha I$ on the covariance matrix, i.e., $\\Sigma^{'} = (1-\\alpha)*\\Sigma + \\alpha I$, where $\\alpha = 0.1$. Furthermore, ICL also uses extra pre-conditioning $λI$ on the whitening matrix, i.e., $\\phi(Z) = U(\\Lambda + λI)^{-1/2}$, where $λ = 10^{-5}$. CW does not use extra pre-conditioning on the covariance matrix and whitening matrix. We experimentally observe that ICL suffers numerical instability when we remove the pre-conditioning (setting as zero or very small number), while our CW can work well without pre-conditioning since that the loss (Eqn.5) with back-propagation through the whitening transformation used in CW encourages the embedding $Z$ to be full-rank. \n\n3. ICL in [47] uses the invariance loss of Barlow Twins [44] which can prevent full collapse by encouraging the diagonal elements of covariance matrix to be $1$ (like BN), while CW/BW(W-MSE, Shuffled-DBN) use the common MSE or normalized MSE which only minimizes the distances between different views without extra constraints. We experimentally observe that ICL suffers severe (dimensional) collapse if we use MSE (or normalized MSE).\n\n We will clarify the differences of CW and ICL in [47] in the revised version. \n\n In terms of the comparison in performance, our CW-RGP can obtain better performance than Zero-CL in [47] (that combine ICL and FCL), e.g., on CIFAR-10, Zero-CL has 90.81% top-1 accuracy while our CW-RGP has 92.47% top-1 accuracy, when training for 1000 epochs using ResNet-18; on ImageNet, Zero-CL has 68.9% top-1 accuracy while our CW-RGP has 69.7% top-1 accuracy, when training for 100 epochs using ResNet-50. We would add the results of [47] in the revised version.\n\nIn terms of the ablation study for Random Group Partition (RGP), we conducted the preliminary comparison in Figure 6 of the paper. Here, we conducted additional experiments on CIFAR-10 & CIFAR-100 to further show the effectiveness of RGP, following the setting in Table 1. The results are reported in the following table, which clearly shows the advantages of RGP. 
\n\n\n\n| Method | CIFAR-10 top-1 | CIFAR-10 5-nn | CIFAR-100 top-1 | CIFAR-100 5-nn |\n| :----------: | :----------: | :----------: | :----------: | :----------: |\n| CW 2 | 91.66 | 88.99 | 66.26 | 56.36 |\n| CW-GP 2 | 91.61 | 88.89 | 66.17 | 56.53 |\n| **CW-RGP 2** | **91.92** | **89.54** | **67.51** | **57.35** |\n| CW 4 | 92.10 | 90.12 | 66.90 | 57.12 |\n| CW-GP 4 | 92.08 | 90.06 | 67.34 | 57.28 |\n| **CW-RGP 4** | **92.47** | **90.74** | **68.26** | **58.67** |\n\n\n\n**Minor issues:** We thank the reviewer and will fix all these in the revised manuscript.",
" We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific queries (questions and concerns) below.\n\n**Question 1: When evaluating the performance of the whitened representation (L167, Fig.3) I wonder if the whitening matrix $\\phi(Z)$ is estimated per mini-batch or over the whole training set?**\n\n**Response:** In Fig.3, the whitening matrix is calculated over the whole training set, when we evaluate the performance of the whitened representation. In this way, we ensure that the output is whitened with normalized stable rank=100%, as pointed out in Fig. 3. \nWe also perform experiments using the estimated whitening matrix using running averages over different min-batch statistics (the so-called evaluation mode in BN/BW). In this way, we also observe that the whitened output has worse performance than the encoding and embedding, with a significant margin. Note that using the estimated whitening matrix can also obtain a high normalized stable-rank (e.g, 99%) after the training is finished. \n\n**Question 2: L308-309, are “d=2” and “d=4” here referring to “g=2” and “g=4”?**\n\n**Response:** “d=2” and “d=4” is not referred to “g=2” and “g=4” here. “d=2” and “d=4” indicate that 2 and 4 positives views are extracted per image respectively, similar to W-MSE [12]. Considering that this may be confused with the size of channels $d$ in L 263-295, we will replace $d$ with $s$ in the revised version. Thank you for pointing it out. \n\n**Concern 1: Regarding The interpretations about “why PCA does not work” and “why the whitened output is not good”.**\n\n**Response:** We first thank the reviewer for the detailed comment. We hope that reviewer recognizes our analysis that the whitening loss in its symmetric formulation (Eqn. 5) can be decomposed into two asymmetric losses (Eqn. 6), where each asymmetric loss requires an online network to match a target with (whitened) constraints. From Eqn. 6, we can clearly find that the target $\\hat{Z}$ and the whitening transformation $\\phi(Z)$ are varying for each mini-batch. Therefore, Eqn. 5 implies the explanation from the reviewer that “an image may have different whitened representations when computed in different mini-batches.” And “the descriptor of an image relies on the eigenvectors ($U$ in L136), which may change dramatically across mini-batches”. However, we would like to point out that Eqn.6 (and the explanation suggested by the reviewer) is not sufficient to explain “why PCA does not work”. Since ZCA whitening has both varying target $\\hat{Z}$ and varying whitening transformation $\\phi(Z)$ (ZCA whitening also depends on the $U$) over different mini-batches, the question is why ZCA whitening can work? (if the suggested explanation is true). Apparently, we need to further compare the magnitude of diversity among different mini-batches. That is why we show Fig. 4, which shows that PCA whitening has significantly diverse targets and whitening transformations. Therefore, it is difficult for the online network to match such a target signal with significant variation, resulting in minimal decrease in the whitening loss, which explains why PCA whitening does not work well. We thank the reviewer and will further improve the text in the revised manuscript.\n\n\n**Concern 2: Regarding the comparisons in Table 2 (e.g. baselines like BYOL/SWAV).**\n\n**Response:** We thank the reviewer for the suggestion. 
Please note that the results of BYOL/SwAV in Table 2 are from the SimSiam paper [8], since our experiments are based on the codebase of the SimSiam paper [8] and our workstation with 4 GPUs (RTX A6000) cannot reproduce BYOL/SwAV due to memory limitations. We note that BYOL has degraded performance when decreasing the batch size, as shown in Fig. 3 of the BYOL paper [16] (e.g., BYOL with a batch size of 256 has a nearly 2.6% drop in top-1 accuracy, compared to the one with a batch size of 4096). As recommended, we will further experiment by running BYOL/SwAV with a batch size of 512 and include the results in the revised manuscript. \n\n\n**Please find the responses to the remaining comments (Concern 3 and \"Minor issues\") in the next comment box (2/2).**\n",
" We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific queries below.\n\n**Q1: Since one the Section 3.3 is ,..., supplementary material.**\n\n**A1:** Thanks for your suggestions. As recommended, we will provide more details, e.g., in illustrating how minimizing Eqn.7 only requires the embedding being full-rank, and proving how the symmetric formulation (Eqn.5) can be equivalently decomposed into two asymmetric losses (Eqn.6) in the revised version. \n\n**Q2: Can the centering+sharpening ,..., a similar analysis ?**\n\n**A2:** We believe “the centering+sharpening operation” used in DINO, and “equipartition constraint” used in SwAV can be looked at through a similar analysis (connected to our whitening loss). The key point of our analysis is that the whitening loss in its symmetric formulation (Eqn. 5) can be decomposed into two asymmetric losses (Eqn. 6), where each asymmetric loss requires an online network to match a target with (whitened) constraints. \n\n1. DINO explicitly formulates such a matching problem (online network to match target) from the view of knowledge distillation, and uses “centering+sharpening operation” to provides the constraints on the targets (e.g., it requires the targets to be centered, and requires that the scale of targets can be controlled by setting the temperature when using softmax). One significant difference between DINO and whitening loss is that DINO uses population statistics of centering (by moving average) while whitening loss uses the mini-batch statistics of whitening; \n2. SwAV can also be regarded as two asymmetric losses (the swapped prediction used in SwAV), and each asymmetric loss can also be viewed as an online network (including the model and the learnable prototype matrix) to match a target with constraints (e.g, the equipartition constraint shown in Eqn. 4 of SwAV paper and the high-entropy constraints shown in Eqn. 3 of SwAV). Note that SwAV also explicitly uses “stop gradient” when it calculates the target (code) by using the iterative Sinkhorn-Knopp algorithm shown in the supplementary materials A1 of SwAV paper. \n\nThanks for your insightful question. \n\n**Q3: L280: To ensure d/g >m, do you modify the last layer of the projection head ?**\n\n**A3:** Yes, we modify the last layer of the projection head to ensure $d/g>m$. The dimension of output of the last layer of the projection head is $d$ (i.e., the dimension of the embedding described in the experimental setup of the paper). Note that the results in Table 1 of Shuffled-DBN has the same dimension ($d=512$) as our CW-RGP. We also have experiments on W-MSE with $d=512$ (using group necessary to ensure $m>d/g$), we find that it has worse performance (W-MSE-2 has 90.36% top-1 accuracy). \n\n**Q4: training time ?**\n\n**A4:** Our experiment on ImageNet using the ResNet-50 is run on a workstation with only 4 GPUs (A6000 with 48G memory), and it takes roughly 96 hours (4 days) in training for 100 epochs. Even though whitening transformation has the time complexity of $O(dm^2+m^3)$, it has much less computation cost, compared to the computation cost of the backbone (encoder). E.g., on the experimental setup (ResNet-18, batch size of 256) for CIFAR-10, CW-RGP costs 23.09 s/epoch, while BYOL and Shuffled-DBN costs 25.21s/epoch and 24.78s/epoch, respectively. \n\n**Q5: Do you have some thoughts on why the gap in performance decreases at 100 ep vs 200 ep (Table 2) ? 
What about W-MSE 4 @ 200 ?**\n\n**A5:** In terms of the decreased performance gap at 100 epochs vs. 200 epochs, we conjecture that CW-RGP has an advantage in improving learning efficiency for self-supervised learning, because it provides whitened targets, which supply a non-redundant signal for representation learning. We observed this phenomenon in our preliminary experiments, by monitoring the training progress. This advantage is much more important in early training. \n\nThe result of W-MSE 4@100 in Table 2 is taken from the W-MSE paper [12]. We did not reproduce W-MSE on ImageNet, because it requires a large batch size (bs=2048) and our workstation with 4 GPUs does not have enough memory to run the experiments with such a large batch size. The experiments for reproducing W-MSE are currently in the pipeline and a thorough comparison will be included in the revised version. \n\n**Q6: Should report GPU memory in Table 2.**\n\n**A6:** Thanks sincerely for your constructive suggestions! We use a workstation with 4 NVIDIA RTX A6000 (48G) GPUs for our method on ImageNet. Our method CW-RGP 4 required about 4*45G of GPU memory when training. The other results shown in Table 2 are mostly reported in the SimSiam paper [8], except that the result of W-MSE 4 is from the W-MSE paper [12], while GPU memory & GPUs are not reported in these papers. ",
" We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific questions below.\n\n**Question 1: I would wonder whether the Barlow Twins method can be better. Then grouping the experiments based on their techniques is easier to show the contribution, especially this paper belongs to the whitening loss branch.**\n\n\n\n**Response:** Thanks sincerely for your constructive suggestions. \nWe note that the Barlow Twins paper does not provide the SSL trained results of Barlow Twins on CIFAR/STL-10/TinyImageNet. We thus conduct additional experiments for Barlow Twins, following the same setup in Table 1, and we use the default hyper-parameters and evaluation protocol as in W-MSE [12] (e.g., trained with Adam optimizer). We use lambda=0.0051 (recommend in Barlow Twins), provide three setups with different projector ({1024-1024, 2048-2048 and 2048-2048-2048}), we report the best results from the three setups. Barlow Twins has top-1 accuracy {87.51, 60.71, 84.55 and 41.49} and 5-nn accuracy {84.19, 54.95, 80.21, 31.84} on CIFAR-10, CIFAR-100, STL-10 and Tiny-ImageNet, respectively. We notice that by further finetuning the lambda hyper-parameter on CIFAR-10, the results of Barlow twins improve to 88.51% top-1 accuracy and 86.53% 5-nn accuracy. It is likely that we donot well reproduce Barlow Twins, since the recommended hyper-parameters of Barlow Twins is for the training of ImageNet. We would like to include them in the revised manuscript. As also mentioned by the reviewer, our manuscript mainly studies the whitening loss and presents the comparison with whitening loss based method (such as W-MSE).\n\n\n\n**Question 2: Finally, the comparison methods from Table 1,2,3 are a bit different, actually I was finding the W-MSE results in Table 3. Can you explain a bit?**\n\n**Response:** The experiments of Table 1 (CIFAR-10/100, STL-10 and TinyImageNet) are based on the public released code of W-MSE paper [12]. We reproduce the results of W-MSE and other baselines, which are also used in the W-MSE paper (except that we reproduce the following SimSiam and Shuffled-DBN method). However, the released code of W-MSE paper [12] does not provide the experimental setups for ImageNet. We thus use the released code of SimSiam paper [8], which is regarded as a good codebase for the experiments on ImageNet (Table 2). We further transfer the model trained on ImageNet to the object detection and instance segmentation tasks (Table 3). Therefore, the results of baselines on ImageNet (Table 2) and Transfer Learning (Table 3) are inherited from the SimSiam paper [8]. \nWe note that we report the result of W-MSE@100 epoch, which is from the W-MSE paper. We donot reproduce W-MSE on ImageNet, because it requires a large batch size (bs=2048) and our workstation with 4GPUs does not have enough memory to run the experiments with such large a batch size. Therefore, we could also not perform downstream tasks without the pretraining W-MSE model on ImageNet (W-MSE paper does not provide the pre-trained W-MSE model and also not show the results transferring to object detection tasks). The experiments for reproducing W-MSE are currently in the pipeline and a through comparison will be included in the revised version. ",
" The paper provides an analysis of different whitening losses used in Self-supervised learning, seeking to interpret some empirical observations, e.g. the connection between whitening losses and the asymmetric methods (BYOL/SimSiam), why PCA does not work, and why whitened outputs are not good representations. The paper also proposes channel whitening with random group partition (CW-RGP), which is shown to be an effective whitening loss. CW-RGP is evaluated on image recognition (ImageNet1k and 4 other benchmarks), object detection (VOC/COCO), and instance segmentation (COCO). Strengths\n\ni) the proposed random group partition is technically sound;\n\nii) the performance of the proposed CW-RGP method looks promising on some benchmarks e.g. COCO object detection/ instance segmentation;\n\niii) an ablation study on the batch size is provided;\n\niv) the writing is clear in general.\n\n\nWeaknesses\n\ni) The interpretations about “why PCA does not work” and “why the whitened output is not good” are not convincing. To me, the explanation could be much simpler and more intuitive: the batch whitened outputs rely on the batch statistics: an image may have different whitened representations when computed in different mini-batches.\n\nWhen using PCA, the descriptor of an image relies on the eigenvectors (U in L136), which may change dramatically across mini-batches. This explains why BW-based approaches prefer large batch sizes. It explains the experimental results shown in Fig. 4. It also explains why whitened outputs are not good representations, i.e. experiments in Fig. 3.\n\nNote that the predictors in the asymmetric methods (L197), on the other hand, do not rely on batch statistics, which I believe is a key difference.\n\nii) The comparisons in Table 2 may not show the full picture, e.g. baselines like BYOL/SWAV may be significantly under-trained. Here, the batch size for BYOL/SWAV is 4096. When trained for fewer epochs (e.g. 200 epochs), a large batch size may hurt the performance as it leads to significantly fewer training iterations. It would be better if baselines like BYOL/SWAV-batch-size-512-epoch-100/200 are also included.\n\n\niv) Channel whitening has been proposed before in [47] for the same task. As [47] has been published in ICLR 2022. I’m not sure if [47] could be considered a concurrent work. Compared to [47], the new content is the random group partition. This extra design may not be enough for NeurIPS. Overall, I believe [47] should at least be included as a baseline, and the ablation on the random group partition should be included.\n\nMinor issues\n\ni) L18-19 “two networks are trained …[8]”, I think there is only one network \n\nii) L286: “rand” → “random”\n\niii) Table 1, Simsim → SimSiam\n\niv) Table 1, references are included for some baselines (SimCLR, BYOL), but not all (e.g. Shuffled-DBN, W-MSE)\n\nv) Table 2 & 3, it would be better if references are included.\n i) When evaluating the performance of the whitened representation (L167, Fig.3) I wonder if the whitening matrix (\\Phi(Z)) is estimated per mini-batch or over the whole training set?\n\nii) L308-309, are “d=2” and “d=4” here referring to “g=2” and “g=4”?\n Limitations are discussed in the main paper. Potential negative societal impacts are not discussed.",
" This paper gives out a thorough investigation into whitening loss applied in self-supervised learning. Based on their analysis, they proposed a channel whitening method named CW-RGP. From their experiments on variuos datasets, CW-RGP gets the SOTA in most cases. Strengths:\n\n1, The writing is nicely done and the paper is organized well for understanding.\n\n2, The experiments are quite sufficient, including the analysis experiments and comparison experiments.\n\n3, The understanding for whitening loss is convincing and there are also some new indicators proposed for future study.\n\nWeakness:\n\nOverall, I didn't see much disadvantages in this paper, I'm just curious about several points below.\n\n1, The barlow twins and VICReg methods are not present in all tables. In the related work, the authors have classified them as soft whitening, so I think some comparison with them should not be neglected. \n\n2, Although, Barlow Twins might get even better results than CW-RGP (from my experiments before), I don't think it can be very important. This paper mainly studies the whitening loss, thus I think the comparison with whitening loss based method (such as W-MSE) is more important. I would wonder whether the Barlow Twins method can be better. \n\nThen grouping the experiments based on their techniques is easier to show the contribution, especially this paper belongs to the whitening loss branch. \n\nFinally, the comparison methods from Table 1,2,3 are a bit different, actually I was finding the W-MSE results in Table 3. Can you explain a bit? None",
" - Self-supervised learning approaches need to avoid collapse to a trivial representation. This has been tackled by various approaches in literature including use of negatives, use of asymmetric networks etc. Use of whitening loss has been explored by some recent works. \n- This paper studies this whitening loss and various variants of the whitening transformation used in practice. \n- The paper investigates some issues with the transformations used and a new random group partition based channel whitening approach to prevent collapse. The approach works well for large batch sizes which is shown through experiments on datasets like ImageNet and transfer to COCO. \n Strengths:\n- The paper seeks to analyze various approaches used for whitening of feature representations used for SSL. The paper is very well written. \n- The paper does a good job at explaining some of the preliminaries.\n- Previous approaches have been analyzed extensively through experiments based on public repositories. \n- The paper decomposes the whitening loss and connects it to common SSL approaches. \n- Obtains state-of-the-art results on standard benchmark datasets. \n- I think the paper will be of significance to the wider ML/SSL community. \n\nMinor Weaknesses, suggestions and questions :\n- Since one the Section 3.3 is one of the most important contributions of this paper, I would suggest having some more details & intuitions behind the explanations. Especially : L185-196. This could perhaps be included in the supplementary material. \n- Can the centering+sharpening operation used in DINO & equipartion constraint used in SwAV be looked at through a similar analysis ?\n- L280: To ensure $\\frac{d}{g} > m$, do you modify the last layer of the projection head ? \n- How does the proposed approach compare to state-of-the-art on training time ? \n- Do you have some thoughts on why the gap in performance decreases at 100 ep vs 200 ep (Table 2) ? What about W-MSE 4 @ 200 ?\n- I agree with the motivation in L263-266. I think from a practical standpoint, it is be a better idea to report required GPU memory & GPUs per approach in Table 2 in addition to the batch size. \n Please refer to the section on \"Strengths And Weaknesses\" for questions and comments. - The authors have adequately discussed limitations of the proposed approach. \n- While the work is more at a fundamental level, I do urge the authors to include some discussion on potential negative impacts. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"TxJEg1BTej",
"hQRmBL1L1Q_",
"x_GHY4_Czu-",
"nips_2022_BbUxkmrstyk",
"lu3oGRgA9Fl",
"VVaP_YTVg4j",
"rSugjUExorR",
"9qN8891QCin",
"nips_2022_BbUxkmrstyk",
"nips_2022_BbUxkmrstyk",
"nips_2022_BbUxkmrstyk"
] |
nips_2022_q16HXpXtjJn | Beyond the Best: Distribution Functional Estimation in Infinite-Armed Bandits | In the infinite-armed bandit problem, each arm's average reward is sampled from an unknown distribution, and each arm can be sampled further to obtain noisy estimates of the average reward of that arm. Prior work focuses on the best arm, i.e. estimating the maximum of the average reward distribution. We consider a general class of distribution functionals beyond the maximum and obtain optimal sample complexities in both offline and online settings. We show that online estimation, where the learner can sequentially choose whether to sample a new or existing arm, offers no advantage over the offline setting for estimating the mean functional, but significantly reduces the sample complexity for other functionals such as the median, maximum, and trimmed mean. We propose unified meta algorithms for the online and offline settings and derive matching lower bounds using different Wasserstein distances. For the special case of median estimation, we identify a curious thresholding phenomenon on the indistinguishability between Gaussian convolutions with respect to the noise level, which may be of independent interest. | Accept | This paper studies offline and online statistical estimation in the infinite-armed bandit setting and gives a set of almost tight upper and lower bounds on the sample complexity. Initially, some reviewers raised concerns about the motivation of general functional estimation and the comparison with existing statistical estimation literature. The authors have made efforts on addressing these issues; their arguments seem reasonable. | train | [
"Ogzh9cfnOHs",
"86XqYg89zpF",
"lLb9Q-m1GlN",
"GTYfpTwAMH8-",
"KKaP9Sl7USj",
"DV6Km_RYIWH",
"byEiXHS38DR",
"4IxRoxDD8KdM",
"mG0cG704W3",
"F3TAi0UoVC",
"3nH9STRkPrS",
"DWH1ydaTvTn",
"uZ-fNyiPED"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your detailed comments and modification of your draft.\nI understand your use of the Doubling trick and its relevance to the related research. Although I still believe the work shouldn't be reviewed solely from the bandit's perspective, I acknowledge the contribution. I raised the score to 5.\n\nBest, \n\nReviewer 246j",
" Dear Reviewer G045,\n\nWe thank you for your valuable time spent reviewing our work, and we really hope to have a further discussion with you to see if our response resolved your concerns. Within the rebuttal period, we have made the following improvements based on your suggestions, which have been integrated into the latest version:\n\n- We add further motivation in the introduction.\n- We elaborate on the algorithmic novelty and theoretical novelty in the lower bound analysis.\n- We better place our work in the context of the existing statistical estimation literature.\n- We discuss the future directions of the extension of our analysis.\n\nWe would appreciate it if you could kindly share your thoughts on the key points in our response, and keep the discussion rolling in case you have further comments. Thank you!\n\nBest wishes,\n\nAuthor",
" Dear Reviewer 246j,\n\nWe thank you for your valuable time spent reviewing our work, and we really hope to have a further discussion with you to see if our response resolved your concerns. Within the rebuttal period, we have made the following improvements based on your suggestions, which have been integrated into the latest version:\n\n- We add further motivation in the introduction.\n- We better place our work in the context of the existing statistical estimation literature.\n\nWe would appreciate it if you could kindly share your thoughts on the key points in our response, and keep the discussion rolling in case you have further comments. Thank you!\n\nBest wishes,\n\nAuthors",
" We thank the reviewer for their comments and feedback. We have striven to incorporate their feedback, and hope that the below clarifications address your concerns.\n\n__{Motivation of functional estimation:__ we thank the reviewer for pointing this out. We have added further motivation in the introduction, including a practical distributed learning application (see general response for additional discussion). We have also added additional motivation regarding the offline setting, as while adaptivity is incredibly powerful it is not always feasible, due to the length of time required to obtain samples as in biological experiments or the communication overhead in distributed learning. \n\n__Algorithmic novelty:__ see general response.\n\n__Theorem 2:__ Theorem 2 details our algorithm's performance in the online setting. We present the corollaries of Theorem 2 in appendix D.3, and provide worst-case bounds for Theorem 2 in Table 1, showing that median and trimmed mean estimation is improved by a factor of $\\epsilon^{-.5}$ by our online algorithm as opposed to the offline baseline, for example.\n\n__Tightness of bounds (distance squared term):__ the lower bounds we present in this work are worst-case, not instance dependent. That is to say, from a lower bound perspective, we show that there exist 2 hard distributions that any adaptive algorithm must draw many samples from in order to provide $(\\epsilon,.1)$-PAC estimation. We provide an instance-dependent upper bound for our algorithm in Theorem 2, and show in Appendix D.3 that for any distribution satisfying the functional-specific assumptions in Section 3.1, the sample complexity of the online algorithm will be no worse than the minimax lower bound (up to terms suppressed by $\\tilde{\\Theta}$). \n\n__Assumptions in Section 3.1:__ In order to exploit the Bayesian nature of the problem, these assumptions are necessary. Concretely, in the maximum estimation setting, an upper bound of $\\beta$ is required in order to provide any theoretical guarantees. Adapting to $\\beta$ (e.g. if the provided upper bound is much larger than the true $\\beta$) is known to be difficult, and requires significant problem specific work [Simple regret for infinitely many armed bandits, 2015]. Thus, in this first work tackling general functional estimation in the bandit setting, it does not seem to be helpful to discuss this adaptivity, and upper bounds of these quantities should be used. Note that our algorithm doesn't require exact knowledge of these distributional parameters, and simply requires bounds on these quantities (e.g. Lipschitz continuity around quantiles).\n\n__Optimality of the algorithms:__ Our algorithm sample complexities are detailed in Table 1, showing that our algorithms are minimax optimal up to lower order terms suppressed by $\\tilde{\\Theta}$. Our upper bounds for the offline setting are detailed in Theorem 1, and in the online setting are described in Theorem 2. Corollary 2.2 gives the lower bound for mean estimation, Corollary 2.3 gives the lower bound for maximum estimation, Corollary 2.4 gives the lower bound for offline median estimation, and Theorem 3 gives the lower bound for online median estimation. Corollary 3.1 and Theorem 4 provide the lower bound for trimmed mean estimation.",
" We thank the reviewer for their careful reading and positive feedback. We have worked to clarify related statistical works, and have incorporated additional discussion regarding the error probability $\\delta$.\n\n__Related work in statistics:__ as the reviewer points out, there are indeed relevant statistical works. However, these works have only considered the offline setting, as the observation model motivating the online setting is novel and specific to the multi-armed bandit setting. We now include these references and additional background in the main text, see the general response for further discussion.\n\n__Dependence on $\\delta$:__ indeed, the results in Table 1 depend on $\\delta$. Since the focus of this work was on characterizing the dependence on $\\epsilon$, $\\delta$ was taken to be a constant for this table (see lower bounds, $\\delta$ taken to be $.1$). Currently, our approaches have differing dependencies on $\\delta$ due to the different assumptions and analysis techniques. Detailed dependence on $\\delta$ can be found in Proposition 1-4 for the offline algorithm and Theorem 2 for the online algorithm. For instance, mean estimation uses $n\\propto 1/\\delta$ from Prop 1, as the Chebyshev inequality is used. By further assuming that the underlying distribution is sub-Gaussian, we can improve this dependence to $n\\propto \\log(1/\\delta)$. Maximum estimation in the offline setting uses $n,m\\propto \\log(1/\\delta)$, requiring a total sample complexity scaling with $\\log^2(1/\\delta)$. Observe from Theorem 2 that our online algorithm will only increase the expected sample complexity by at most a $\\log(1/\\delta)$ factor as compared to the offline method (however, the online sample complexity can be no worse than the offline one). Considering the simultaneous limiting behavior as $\\epsilon,\\delta \\to 0$ is an interesting direction of future work.",
" We thank the reviewer for their careful reading. We have worked to better place our work in the existing literature by adding a paragraph comparing our work with existing statistical literature. As we describe, the offline setting has been previously considered (non-adaptive), but the adaptivity inherent to the online infinite-armed bandit setting has not been studied.\n\n__Doubling trick:__ the choice of per-round arm budgets is a minor component in our algorithm, but we note that the \"doubling trick\" normally refers to regret-based algorithms, where a fixed-budget algorithm which attains $O(\\log T)$ regret for a fixed horizon $T$ is converted to an algorithm with anytime performance guarantees by restarting it from scratch in epochs of progressively doubling length. The reviewer is correct that such methods often perform poorly in practice due to the discarding of all prior information in each new epoch. Our algorithm does not use this approach as our objective is identification, not regret minimization. While our number of arm pulls does indeed increase geometrically across rounds, this form of doubling has been shown to empirically yield good performance ([Almost Optimal Exploration in Multi-Armed Bandits, 2013], [Distributed Exploration in Multi-Armed Bandits 2013]). This scaling of arm pulls is just for convenience, trading off between rounds of adaptivity and number of arm pulls required, and any increasing sequence can be used (e.g. one pull per round). \n\n__Relation with existing statistical literature:__ we thank the reviewer for highlighting this important point. We have sought to better place our work in the context of the existing statistical estimation literature, and have expanded upon this point in the general response.",
" We thank the reviewer for their careful reading of the paper. We have added text to the manuscript to describe the novelty and scope of the setting in greater detail, and have clarified the motivation of decoupling the observation model and the statistical objective.\n\n__Motivation:__ we have added additional background and motivation to the introduction, as detailed in the general response.\n\n__Clarity:__ thanks to the reviewer's comments we have worked to further clarify the exposition by expanding on the definitions and formulations prior to the results.\n\n__Novelty:__ while the offline algorithm does not allow for many algorithmic degrees of freedom, the analysis required is novel, especially for the accompanying lower bound. Further, while the online setting may appear similar to existing bandit works, the key algorithmic aspect of leveraging information across arms to exploit the underlying Bayesian structure is novel. See the general response for further points.\n\n__Applicability to classical bandit setting:__ We think that these ideas are applicable only for the infinite-armed scenario, and not the classical setting. This highlights the novelty in our setting; due to the common underlying arm reward distribution, in order to estimate the median of the arm-reward distribution, we do not need to identify the median arm. This ability to average over arms is unique to the Bayesian nature of the infinite-armed bandit problem. \n\n__Limitations:__ Interesting directions of future work include extending our analysis to non-indicator-based functionals, such as the BH threshold. We have added this to the conclusion in the revised manuscript. ",
" __Novelty:__\nWhile our proposed algorithms share surface-level similarity with existing algorithms for the classical finite-arm bandit problem, our algorithm has significant algorithmic and analytical differences from classical bandits. From an algorithmic perspective, we need to carefully prioritize arms based on their confidence intervals in order to determine which arms may still impact the quantity of interest, and then leverage the Bayesian nature of the problem by averaging those arms determined to have means within the range of interest.\nFrom an analytical perspective, we cannot directly show that our online algorithm achieves $(\\epsilon,\\delta)$-PAC estimation, but rather need to first show that our online algorithm provides $(\\epsilon/2,\\delta/2)$-PAC estimation of the offline algorithm, which we then show achieves $(\\epsilon/2,\\delta/2)$-PAC estimation of the desired functional. This first step shares similarities with Frequentist analyses, and the second with Bayesian analyses and functional estimation ones in statistics, but both require nontrivial modifications for this specific setting. Observe that in standard Frequentist settings, one cannot average the estimated arm-mean across different arms (as arm means are arbitrary), and for maximum estimation in the Bayesian setting averaging is not beneficial. Studying functionals beyond the maximum in the Bayesian setting necessitates this more sophisticated algorithmic structure which we propose.\n\nIn addition to the novel techniques required for the algorithm design and analysis, the lower bounds provided in this work are novel and revealing. Even the lower bounds we provide in the offline case differ significantly from the usual problems considered in the statistics literature. Further, we utilize the Wasserstein distance to prove lower bounds for both offline and online algorithms, and discover a thresholding phenomenon of the Wasserstein distance with respect to the noise variance arising in median estimation, which may be of independent interest.",
" __Motivation:__ \nIn this work, we showed how the infinite-armed bandit setting can be utilized to study more general functionals beyond the maximum, decoupling the observation model and the objective. \n\nThe infinite-armed bandit observation model has received much attention due to its applicability in large-scale settings [Bandit problems with infinitely many arms, 1997]. Objectives beyond the maximum are commonly of interest; e.g. conditional value at risk, quantiles, and beyond [PAC identification of a bandit arm relative to a reward quantile, 2017] [Quantile-Regret Minimisation in Infinitely Many-Armed Bandits, 2018].\n\nIn this work, we show that this observation model can be extended beyond just the case of maximum estimation, and construct a general algorithm with fundamental theoretical guarantees that can be applied to many settings. One such natural setting is in large-scale distributed learning. Here, a server / platform wants to estimate how much test-users like their newly released product. Users return a noisy realization of their affinity for the product, and the platform can decide to pay the user further to spend more time with the product, to provide additional testing.\n\nFor many natural objectives which are robust to a small fraction of adversarial users, e.g. trimmed mean, median, or quantile estimation, we see that our algorithm will enable estimation of the desired quantity to high accuracy while minimizing the total cost (number of samples taken). We include the following sentences discussing the motivation of learning general functionals in the revised manuscript.\n\nIn [Adaptive Monte Carlo Multiple Testing via Multi-Armed Bandits, 2019], estimating the Benjamini Hochberg (BH) threshold arises in the field of multiple hypothesis testing with applications in statistical inference. The estimation of the median/quantile is similar to estimation of the BH threshold, as both depend on the order statistics of the underlying distribution. Estimating the median or trimmed mean has important applications in robust statistics, for instance, maintaining the fidelity of the estimator in the presence of outliers. We also emphasize that the median estimation in our setting is different from the usual median estimation setting. This is because each sample is noisy but in the statistical literature we have clean samples whenever we draw one sample from the underlying distribution. \n\n__Relation with existing literature:__\nFollowing the reviewers' helpful comments, we have added the following discussion to the manuscript to better position our work.\n \nFrom a statistical perspective, the sample complexity in the offline setting is closely related to deconvolution distribution estimation [Deconvolution of a distribution function. 1997] [All of statistics: a concise course in statistical inference. 2004] [Estimation of distributions, moments and quantiles in deconvolution problems. 2008] [On deconvolution with repeated measurements. 2008] [On deconvolution of distribution functions, 2011]. We cite these references and elaborate on the below comparison in the revision. Nevertheless, these previous works mainly focus on the expected L2 difference between the underlying distribution function and its estimation. This simplified setting does not allow for consideration of the trade-off inherent in our setting between the number of points and the (variable) number of observations per point. 
Additionally, these past works did not calculate the specific sample complexity for functionals like median and quantile. Since the noise is treated as fixed and uniform, there has been no study of the online setting where adaptive resampling enables dramatic sample complexity improvements. In particular, the challenge is that we have noisy observations, which makes the lower bound even in offline cases a significant challenge that has not been dealt with in the past, let alone the online case.",
" The paper studies the problem of functional estimation under the infinite-armed bandit setting (Berry et al., 1997). Specifically, the authors considered online and offline algorithms for estimating multiple functionals, such as mean, median, maximum, and trimmed mean. The main contribution is determining the sample complexity bounds for these estimation tasks. One strength is that the sample complexity bounds are tight, up to logarithmic factors in the error parameters.\n\nAnother strength is that the authors have unified formulations and algorithms for both settings, showing that the online and offline sample complexities can differ for some functions. One can find more details in Table 1. \n\nA weakness, in my opinion, is that the paper does not provide much motivation for learning functionals (other than the mean) under the infinite-armed bandit setting. I found one sentence on page 1 saying that \"in many scenarios, including single-cell RNA-sequencing (Zhang et al., 2020)\", but it doesn't seem that a related experiment appeared in the paper. \n\nAnother significant weakness is the writing, especially the clarity. I suggest the authors introduce the critical definitions and formulations before explaining the results. In the paper, the authors seem to assume that the readers are already familiar with the related literature. \n\nOne more weakness is the novelty. The offline algorithm is an averaging algorithm with standard concentration attributes. And the online algorithm essentially refines the rough ordering of elements by constructing finer and finer confidence intervals on the promising ones. I don't think these ideas are novel, or maybe they are relatively standard. Please refer to the strengths and weaknesses section for suggestions or points to address.\n\nA question here is whether the authors think some of these ideas apply to the classical bandit settings, and why. I think the settings and results are reasonable. But I didn't find a specific section or paragraph discussing the limitations of the current work or future directions. The authors may consider adding such a section since there's still space.",
" The paper considered distribution functional estimation problems in the infinite armed bandit problem.\nFirst, the authors formulated the distribution functional estimation problems (especially mean, median, maximum, trimmed mean estimations). \nThe authors presented meta-algorithms for such distribution functional estimation problems for both online and offline settings together with the sample complexity guarantees.\nNext, lower bounds on the sample complexities for both offline and online settings are shown. \nFor the lower bound for the median estimation problem, the authors remarked on \"thresholding phenomena\" in order to prove tighter lower bounds. \n - Strengths\n\nA relatively thorough upper/lower bound characterization of the distribution functional estimation problem has been made. (for both offline and online settings)\n\n\n- Weaknesses\n\nThe online algorithm is based on the doubling trick so it may be unpractical. (although theoretically sound)\nNo numerical experiments have been made.\nPositioning of the work (see Limitations section).\n\n Please see the Limitations section. \nMy main concern is that the research associated with this paper is not well surveyed.\nAs the main objective is different from the papers cited in the related work, I don't think the paper deals with the problems that could only be cast as bandit problems. \nThe problem considered in this paper (estimating the distribution functional estimation in online/offline setting) and the problem considered in the related work (minimizing regret or \nsimple regret) have a largely different nature. \n\nIt looks like there are many papers in statistics that deals with the problem of estimating the statistical functionals. for example,\n\nWasserman, Larry. All of statistics: a concise course in statistical inference. Vol. 26. New York: Springer, 2004.\n\nHall, Peter, and Soumendra N. Lahiri. \"Estimation of distributions, moments and quantiles in deconvolution problems.\" The Annals of Statistics 36.5 (2008): 2110-2134.\n\n(Note these areas are not my expertise.)\n\nWhile there may be a novelty, I believe that appropriate comparisons with papers dealing with similar issues need to be made.\n\n\n\n\n",
" The paper studies distribution functional estimation (estimating mean, median, maximum, trimmed mean) in the infinite-armed bandit problem. \n\nFor the offline setting, the authors design the appropriate number of points $n$ and the number of samples per point $m$ to minimize the sample complexity $m*n$. \n\nFor the online setting, they propose an elimination-based algorithm that can adaptively screen out those redundant points, which are not related to the functional estimation. Thus, the online algorithm can reduce the sample complexity (in expectation). However, the online algorithm shares the same worst-case sample complexity as the offline algorithm.\n\nThey also provide matching lower bounds for the sample complexity under online/offline settings. For the mean and maximum estimation, the proof uses Wasserstein distances to upper bound the KL divergence. For the median estimation, the Wasserstein distance-based method is too loose, so they propose a proof and show a ``thresholding phenomenon’’, KL divergence does not change smoothly with the noise level, which may be of independent interest.\n Originality: It’s a novel work. The paper focuses on the sample complexity and reveals the differences between various functions and offline/online settings. My only concern is that the work seems to be more related to statistics, but the paper doesn't review papers in that area. I think a discussion of the connection to papers in statistics can help the position of the paper a lot.\n\nQuality: It’s technically sound.\n\nClarity: The paper is written clearly and well organized.\n\nSignificance: The paper considers the sample complexity for different functionals and online/offline settings. It provides matching upper bound and lower bound for every scenario. The result (which is summarized in Table 1) is novel, complete, and solid. \n\nThe paper shows interesting insights which can shed some light to future work. First, the work reveals the difficulty for various functions. For example, estimating the median is harder than the mean, but not harder than the trimmed mean. Second, the work reveals the difference between offline and online settings. For mean estimation, the online setting has no advantage. But online algorithm has improved over offline for other functionals.\n Note that the sample complexity depends on the target accuracy $\\epsilon$ also the confidence level $\\delta$. Why not consider the dependence on $\\delta$ in Table 1? Is that because all the terms have the same growth rate $O(log(1/\\delta))$?\n\nSince the objective in this paper is no longer minimizing regret or identifying the best arm as in the classical MAB setting, could the author further list the related work in median/quantile estimation? For example, the result for mean estimation is a simple extension of concentration/Chebyshev inequality. For the online setting, there’s no need to discard points, since every point should contribute to the mean. NA",
" The authors study the infinite-armed bandit problem. Prior work focused on best arm identification in this setting; the authors instead consider estimating a functional of the underlying distribution F. They examine indicator-based functionals such as the mean, median, maximum, and trimmed mean. They study both the offline and online settings. They give a meta-algorithm for arbitrary indicator-based functionals with a general sample complexity and also provide lower bound results for each setting. Strengths:\n\nIt is nice that they have a unifying algorithm and sample complexity result for all settings. \n\nThey provide lower bounds for all of the settings.\n\nThe problem formulation is mathematically elegant and it is interesting to consider indicator-based functionals.\n\nWeaknesses:\n\nThe motivation for estimating a general functional seems weak to me. The authors mention single-cell RNA sequencing, but do not explain to the reader in detail how estimating the median or trimmed mean arise in these practical settings. I think it is particularly important to motivate estimating the median and trimmed mean because these seems like the most novel settings of this paper. In what real-world problems would we apply these algorithms? I also think the offline setting should be motivated here.\n\nI also think the paper would benefit from experimentally validating the results. How much of a gap is there between the online and offline algorithms in practice? \n\nThe algorithmic novelty is limited. While it is nice that the algorithm unify all settings, it uses standard ideas from the literature, applied to the indicator-based functional. It basically adaptively identifies which drawn arms belong to the set defined by the indicator-based functional using confidence bounds and order statistics. \n\nIt would be nice to give some concrete examples where Theorem 2 gives a much better sample complexity than is possible in the online setting. \n\nI think that there could be a clearer presentation relating the upper bounds and the lower bounds. The authors argue that the bounds are tight, but for the online setting upper bounds contain a distance^2 term that does not appear in the lower bounds. So, it is difficult to evaluate the gap. \n\nThe algorithm seems to require knowing the quantities from the assumptions in section 3.1. These do not seem like they would typically be known in practice. It would be useful and important to give guidance to practitioners on how to set these quantities in practice. \n Could the authors comment on the optimality of the algorithms in all of the settings in more detail? \n\nCould the authors elaborate on the motivation more? As of now, the applications are unclear to me.\n\nPost rebuttal: the authors have addressed my questions on optimality, theoretical comparison between the offline and online settings, and the motivation. Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"lLb9Q-m1GlN",
"byEiXHS38DR",
"DV6Km_RYIWH",
"uZ-fNyiPED",
"DWH1ydaTvTn",
"3nH9STRkPrS",
"F3TAi0UoVC",
"mG0cG704W3",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn",
"nips_2022_q16HXpXtjJn"
] |
nips_2022_4-bV1bi74M | 🏘️ ProcTHOR: Large-Scale Embodied AI Using Procedural Generation | Massive datasets and high-capacity models have driven many recent advancements in computer vision and natural language understanding. This work presents a platform to enable similar success stories in Embodied AI. We propose ProcTHOR, a framework for procedural generation of Embodied AI environments. ProcTHOR enables us to sample arbitrarily large datasets of diverse, interactive, customizable, and performant virtual environments to train and evaluate embodied agents across navigation, interaction, and manipulation tasks. We demonstrate the power and potential of ProcTHOR via a sample of 10,000 generated houses and a simple neural model. Models trained using only RGB images on ProcTHOR, with no explicit mapping and no human task supervision produce state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation, including the presently running Habitat 2022, AI2-THOR Rearrangement 2022, and RoboTHOR challenges. We also demonstrate strong 0-shot results on these benchmarks, via pre-training on ProcTHOR with no fine-tuning on the downstream benchmark, often beating previous state-of-the-art systems that access the downstream training data. | Accept | *Summary*
The paper presents ProcTHOR, a framework to generate interactive 3D environments from an underlying distribution of room and object layouts. In the current work, 10,000 3D environments of varying sizes, # rooms, and object distributions are sampled and enable simulation for object search and manipulation tasks. The experiments demonstrate that training policies on the synthetically generated environments and then finetuning them on other datasets like RoboTHOR, AI2-THOR, and Habitat-Matterport3D leads to state-of-the-art performance. Furthermore, the ablation experiments reveal that the transfer task performance continues to improve as more and more virtual 3D environments are sampled for training. Finally, 'zero-shot' experiments (without finetuning) are included with surprisingly good results.
*Reviews*
The reviewers' ratings are 4 (borderline reject), 8 (strong accept) and 9 (very strong accept).
Reviewer L9s9 (borderline reject) did not respond to the rebuttal, but their main concerns were:
- (W1) the paper should be submitted to the datasets track
- (W2) the paper should include experiments with RGB-D sensors, and
- (W3) the paper should include experiments with other procedural generation schemes.
*Decision*
I have already discussed and dismissed concern W1 in my comments below. Regarding W2 and W3, these are asks for more experiments. The authors provided reasonable justifications and responses to these concerns. Further, I'm persuaded by the other reviewers that the paper already represents a 'massive feat of engineering' and note that the experiments already cover both zero-shot and finetuning settings on multiple leaderboards. Therefore I am putting less weight on L9s9's low rating.
AqAB and QxMz are united that this paper and the associated code release will have a significant impact on the Embodied AI community and I agree. The paper is also acknowledged as being extremely well-written, with very thorough experiments, and I feel it could be considered for an outstanding paper award.
| train | [
"C5tBcHGGlq4",
"oU8JAC-4E8w",
"MmIkCK1y5Ib",
"0lqBe2AFFTZ",
"VmKms6fvNIwO",
"Uq2dKEhlCyi",
"FVA96BnGFoj",
"HYArW8bFU1WE",
"O_lQ55QpsQk",
"Vsi1qwnDNKb",
"iRB6vYvIGNGt",
"lYPczfQCw1D",
"iG-XnnsV3kB",
"quTNP5JCuUh",
"XgNJZM4m4y",
"hPkFpUvkv-M",
"arB0LJ1_MrB",
"JXpjVKrsxCj",
"g8VaCGU4CH8",
"xPnzCq3VBK7",
"uMqyNVJymbU",
"N050GQ-RfCu",
"0nsuTMch1lE",
"NihVzPgZSSY"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their thoughtful feedback and support of the work.\n\n> I'd recommend adding the response to W.5 into the paper to make it slightly stronger.\n\nThank you for the suggestion. We agree and have added the results to Section H of the Appendix.",
" We thank the reviewer for their detailed response and positive feedback.\n\nBased on your comments, we are looking into interpretability studies to better understand what is being learnt from our trained agents in HM3D, and running more fine-tuning experiments in HM3D under different ablations. Unfortunately, we do not have enough time to draw meaningful conclusions from these experiments yet, but these questions are being actively pursued by us.\n\n> For the final version of the paper, it would be nice to have the navigation + interaction simulation speed comparison with Habitat-2.0 / iGibson.\n\nThank you for the thoughtful suggestion. We will do our best to provide a meaningful comparison with the additional platforms in the final paper.",
" \nThanks folks for the robust discussion.\n\nRegarding the question of main track vs. the datasets and benchmarks track. I refer to the datasets and benchmarks track [call for papers](https://nips.cc/Conferences/2022/CallForDatasetsBenchmarks) which says \"It is also still possible to submit datasets and benchmarks to the main conference (under the usual review process)\". Therefore it is the authors' choice which track to submit to and this paper will be considered for the main track. \n\nBest,\nAC\n",
" I appreciate the commitment of the authors and the promise in combination with the anonymous code convinces me that code will be released.",
" As mentioned in my response to the R#AqAB below (https://openreview.net/forum?id=4-bV1bi74M¬eId=Uq2dKEhlCyi), I still hold that the main contribution of this work is a dataset and engineering efforts and as such, the paper should go into the benchmarks & dataset track.\n\nBut the paper and work itself is of very high quality and should definitely be accepted **somewhere at NeurIPS**, so I'll change my review to acceptance and let the AC comment on the track.",
" Dear R#AqAB,\n\n> the engineering effort of putting together the best methods from this vast scene generation literature and coming up with an effective tool\n\nI agree that this was a massive undertaking and should be recognized as such. But I feel like for \"engineering efforts\", there's the benchmark & dataset track.\n\n> demonstrating convincingly that it is effective for EAI on several tasks\n\nSure. But was a new method used or were existing methods applied to a new dataset? IMHO the former is suitable for a NeurIPS paper, the latter isn't.\n\nBut okay, this is not a hill I'm going to die on, so I'll update my score to also recommend acceptance and the AC can decide if this paper is in the right NeurIPS track.\n",
" I saw that you added the new section on ArchitecTHOR into the main body of the paper and I appreciate that. Will update the score accordingly.\n\nRegarding \"...the majority of readers of our paper would be interested in learning more about the features of ProcTHOR...\" and \"...fewer [...] readers would be interested in the technical details...\" - aren't you now making the case here that this is more of a dataset paper and it's more about the features of the dataset and less about the technical contribution?",
" Great. I'd recommend adding the response to W.5 into the paper to make it slightly stronger.\nAnd the additional experiments and response to Q.5 are appreciated!",
" Regarding limitations: alright fair enough. My mistake - I'll adjust that in the review.\n\nRegarding answers: okay, these answers are great and I appreciate the Q.8 code example.\n\n\n",
" I thank the authors for their detailed responses and experiments to address the reviewer concerns. Most of my concerns are addressed and I'm inclined to retain my rating. Please see detailed responses below.\n\n### Related work comparison for scene generation\n* This addresses my concern. Thanks.\n\n### What is being transferred for HM3D ObjectNav?\n* In summary, the authors suggest that navigational behaviors are transferred despite the visual appearance change. The train/val curves appear to indicate this. \n* For a more thorough analysis, I would suggest that the authors perform interpretability studies like: https://openaccess.thecvf.com/content/CVPR2022/html/Dwivedi_What_Do_Navigation_Agents_Learn_About_Their_Environment_CVPR_2022_paper.html\n\n### Do scaling ablations hold true for fine-tuning?\n* It is encouraging to see benefits of scaling for RoboTHOR fine-tuning. But I would have preferred to see results on HM3D fine-tuning instead since that's where the 0-shot performance is not very indicative. It would also be more indicative of benefit of ProcThor for real-world robotics.\n* If there is sufficient time, could the authors try this? I think this is important, particularly since the authors state that the 0-shot performance scaling for HM3D is unreliable.\n\n### Speed comparisons with other frameworks\n* Thanks for these comparisons. It is good to see the navigational simulation speed is somewhat comparable with that of Habitat HM3D. \n* For the final version of the paper, it would be nice to have the navigation + interaction simulation speed comparison with Habitat-2.0 / iGibson.\n\n### Do RoomSpecs limit diversity?\n* Understood. Thanks for the clarification. \n\t\nAs a final note, I'm tending towards retaining my high-score despite the other reviews since my take on the paper is a bit different. I agree that the techniques used in ProcTHOR may not be completely novel. Also, given the scope of the effort taken, the details in the main paper may not be very satisfying with a 9-page limit. However, I feel that the biggest contributions from ProcTHOR are:\n* the engineering effort of putting together the best methods from this vast scene generation literature and coming up with an effective tool\n* demonstrating convincingly that it is effective for EAI on several tasks\n\nGiven the amount of effort spent on the ProcTHOR pipeline, it would have been tempting to go light on the experiments to demonstrate its utility. So I appreciate the experimental thoroughness despite the weaknesses cited so far (many of which have been eventually addressed anyway).\n\n",
" A common question that was posed by two reviewers, L9s9 and QxMz, was why\nwe chose to submit our paper to the main track as opposed to the dataset\nand benchmark track.\n\nThe decision to submit to the main conference track was taken by the\nauthors of the paper after a fair bit of deliberation. This paper has\ntwo key contributions: (1) A new framework for procedural generation\nwith an accompanying artist-designed test set (2) The first large-scale\ndemonstration of pre-training for Embodied AI to cover 6 benchmarks\nacross multiple tasks and simulators. The first contribution is indeed\ncentral to the paper and the advantages and usage of ProcTHOR will\nlikely extend far beyond the 6 benchmark results presented in this\npaper. However, as mentioned by all reviewers, the results we have\nobtained are very strong and broad, including 0-shot performance on\ntasks that even outperforms previous SOTA results. While large scale\nstudies spanning multiple datasets and tasks are seen more often in\ncomputer vision, it is quite rare to see multiple task and multiple\nsimulator studies with SOTA results in the domain of Embodied AI.\nAnother point we would like to note is that our 0-shot results are\nsurprising and unexpectedly strong, and notable for the Embodied AI\ncommunity, where one expects that models trained on procedurally\ngenerated data will simply learn biases within the generation process\nand have a tough time generalizing 0-shot to human curated downstream\ntasks. The authors felt that given these strong results and unexpected\nfindings, we should submit this work to the main conference.",
" > Q.6) Did you mention the 16 different scene specifications somewhere? \n> Why these?\n\nIt’s in the attached code [here](https://anonymous.4open.science/r/procthor-CF0D/procthor/generation/room_specs.py).\nWe empirically found these room specs to be representative of the vast majority of single-floor houses\nand supported generating highly diverse and plausible houses. Please also see our answer\nto Reviewer 2 (W2/Q2) regarding what these specifications correspond to.\n\n> Q.7) Same for the 18 different Semantic Asset groups.\n\nWe manually created semantic asset groups based on plausible\nrelationships between objects (such as chairs around tables,\ntelevisions on stands, and night stands next to beds). The\nfull list is defined in the `asset_groups` folder [here](https://anonymous.4open.science/r/procthor-CF0D/procthor/databases/__init__.py).\n\n> Q.8) You mention customizability and \"a few simple lines of\n> specifications\" but what does that actually look like?\n\nHere is a full example of what a room spec for a kitchen and \nliving room looks like:\n\n```python\nRoomSpec(\n room_spec_id=\"kitchen-living-room\",\n sampling_weight=2,\n spec=[\n LeafRoom(room_id=2, ratio=1, room_type=\"Kitchen\"),\n LeafRoom(room_id=3, ratio=1, room_type=\"LivingRoom\")\n ]\n)\n```\n\nand for a house with a bathroom, bedroom, kitchen, and living room:\n\n```python\nRoomSpec(\n room_spec_id=\"4-room\",\n sampling_weight=5,\n spec=[\n MetaRoom(\n ratio=2,\n children=[\n LeafRoom(room_id=4, ratio=2, room_type=\"Bedroom\"),\n LeafRoom(room_id=5, ratio=1, room_type=\"Bathroom\")\n ],\n ),\n MetaRoom(\n ratio=2,\n children=[\n LeafRoom(room_id=6, ratio=3, room_type=\"Kitchen\"),\n LeafRoom(room_id=7, ratio=2, room_type=\"LivingRoom\")\n ]\n )\n ]\n)\n```\n\nTerms used:\n- room_spec_id: uniquely identifies the room spec amongst a set of room \nspecs.\n- sampling_weight: specifies how frequently a given room spec should\nbe sampled, relative to others in a set of room specs.\n- room_id: makes it easy to debug the mapping between rooms in the\nroom spec and those in the final house.\n- ratio: discussed in the floorplan generation section of the appendix.\n- room_type: the type of room, such as kitchen or bathroom.\n- MetaRoom: defines a room “subtree”, which contains two or more\nchild rooms.\n- LeafRoom: a room that does not have any child rooms.\n\n> Q.9) Thanks for providing the performance table in Tab.1. That's\n> always been my main gripe with AI2THOR. But how does the process\n> distribution work? I.e. how do you run 15 processes on each of\n> the 8 GPUs?\n\nProcTHOR takes up 400 MB of space each time it is initialized to\ncreate a new Unity window. The 400 MB of space can be allocated\nto any 8 of the GPUs. For our experiments, we create 15 x 8 = 120\ninstances of ProcTHOR, evenly distributed across the 8 GPUs. On\neach GPU, the 15 instances take up about 400 MB x 15 = 6 GB of space.\nWe then use Python’s multiprocessing module to execute actions on\neach of the 120 instances in parallel, which is similar to how\nagents are trained in parallel with AllenAct.\n\n### Limitations\n\n> Limitations aren't mentioned in the main body of the paper. I\n> thought that was supposed to be included last year or 2 years ago.\n\nTo the best of our knowledge, limitations do not have to appear\nin the main body of the paper. 
The\n[2022 style guide](https://media.neurips.cc/Conferences/NeurIPS2022/Styles/neurips_2022.pdf)\nonly mentions limitations in L171 and does not make reference to\nit being in the main paper.\n\n>I'd add that the lighting model is still relatively simple and might not correspond to lighting conditions in real houses but this effect may be \"domain-randomized\" away.\n\n>Also, robot navigation is vastly simplified; real-world environments might have different friction surfaces, carpets with bumps, stairs connecting different rooms on the same floor, etc.\n\n>Also, real environments might have more decoration and trinkets lying around, as well as soft objects that can appear in many different configurations (a hoody thrown over a chair or splayed out on the sofa).\n\nThere is a long way to go both in simulation and agent modeling before\nwe can have faithful virtual worlds that can be considered\ninterchangeable with the real-world, and agents that can properly\nreact to all that variability. However, we believe that ProcTHOR\nwill enable progress in this direction.",
" > W.5) Validity of claims -- other simulators too small to pre-train.\n\nThank you for this helpful suggestion. In the table below we report\nthe zero-shot performance of ProcTHOR, iTHOR, and HM3D pretrained\nmodels on the ArchitecTHOR validation set. For the HM3D model we\nalso report performance after fine-tuning on iTHOR. Here we use a\nmodel trained for 195M steps in HM3D, which attains a success rate\nof 53% success rate and an SPL of 0.3045 on a 200 episode subset\nof the validation set (this checkpoint was chosen as it maximized\nthe SPL). We notice that leveraging HM3D’s additional data for\npre-training improves performance over training without it, but\nis still significantly outperformed by just training on ProcTHOR.\n\n| Model | ArchitecTHOR Val Success | ArchitecTHOR Val SPL |\n|-------|-------|-------|\n|**ProcTHOR 0-Shot** | **63.1%** | **0.469** |\n|HM3D 0-Shot | 8.89% | 0.041 |\n| HM3D Fine-Tuned on iTHOR | 45.9%| 0.3503 |\n| iTHOR 0-Shot | 36.3% | 0.262|\n\nFor OpenRooms, the 3D models of the scenes are not publicly\navailable to use. Therefore we unfortunately cannot use their\nscenes for any robotics experiments.\n\nMegaverse only includes toy environments that consist only o\nf primitive objects, such as cubes and cylinders\n([paper](https://arxiv.org/pdf/2107.08170.pdf)). Thus,\nit is most unlikely that ObjectNav agents pre-trained on\nMegaverse will be effective in household-based tasks.\n\nThe above results indicate that the size and diversity of\nProcTHOR-10k hugely benefits transfer performance.\n\n> Q.1) what do you mean by \"fully interactive\".\n\nProcTHOR inherits all its interactive functionality from AI2-THOR.\nIt currently supports manipulation that abstracts away friction-based\ngrasping. Objects are attached to the gripper when the gripper is\nsufficiently close and the grasp action is called (see the ManipulaTHOR\npaper for more details on that agent). The open/close state is not\nbinary, as openable objects can be opened fractionally by any amount.\nThere is also support for the ManipulaTHOR agent opening doors\ninch-by-inch (for an example, see:\n[https://procthor-rebuttal.netlify.app/arm-open-close.mp4](https://procthor-rebuttal.netlify.app/arm-open-close.mp4)).\n\n\n> Q.2) If all objects are rigid bodies, how do you assign mass, friction, and elasticity? Are these also procedural\n> or can they be changed?\n\nFor both the assets used in AI2-THOR’s asset library and our custom-built\nassets, such properties are manually specified on a per-asset basis,\nwhich is estimated based on the values of similar real-world objects.\nHowever, the simulator also supports changing these values to arbitrary\nnumbers at runtime. This functionality can support new research\ndirections (e.g. 
requiring agents to estimate the mass of objects\nby pushing them).\n\n> Q.3) What percentage of objects have these states (open/closed, etc)?\n\nAmong the 1,633 objects currently in our object database:\n- Pickupable: 678 / 1633 ~ 41.5%\n- Openable: 186 / 1633 ~ 11.4%\n- Moveable: 588 / 1633 ~ 36% - note that objects like chairs may be \n moved but not picked up by any of AI2-THOR’s current agents\n- Breakable: 217 / 1633 ~ 13.3%\n- Transparent: 31 / 1633 ~ 1.9%\n- Switched on or off: 281 / 1633 ~ 17.2%\n- Cookable: 30 / 1633 ~ 1.8%\n- Heat surfaces (e.g., microwaves that can cook objects): 90 / 1633 ~ 5.5%\n- Cold surfaces (e.g., fridges that can freeze objects): 30 / 1633 ~ 1.8%\n\n> Q.4) What is the wall-clock time for ProcTHOR training?\n\nSection F of the appendix contains details regarding the wall clock times\nfor each of the experiments. To summarize:\n- L532: ProcTHOR ObjectNav pre-training takes 5 days for 423 million steps.\n- L564: RoboTHOR ObjectNav fine-tuning takes 7 hours for 29 million steps.\n- L571: HM3D-Semantic ObjectNav fine-tuning takes 43 hours for 220 million steps.\n- L578: AI2-iTHOR ObjectNav fine-tuning takes 1.5 hours for 2 million steps.\n- L593: ProcTHOR ArmPointNav takes 3 days for 100M steps.\n- L611: ProcTHOR Rearrangement pre-training takes 4 days for 182 million steps.\n- L617: AI2-iTHOR Rearrangement fine-tuning takes 16 hours for 9 million steps.\n\nNote that the line numbers correspond to those in the originally submitted\nsupplementary materials.\n\n> Q.5) Random seeds\n\nWe reran ProcTHOR ObjectNav pre-training with 5 different random seeds\nfor 100M steps and found that the variance across seeds is quite small.\nThis measurement was performed for our 0-shot results on a set of\n1000 ObjectNav tasks divided evenly between unseen ProcTHOR val\nhomes, ArchitecTHOR val, iTHOR val, and RoboTHOR val.\n\nWe obtained:\n- Train success: 0.6787 mean, 0.0289 std\n- Val success: 0.453 mean, 0.012 std\n\n(Here, Train numbers refer to ProcTHOR train and Val refers to the 1000\ntask set detailed above.)\n\nThe train and val curves across different random seeds also follow\neach other closely. Here is an anonymous link to images of them:\n[https://procthor-rebuttal.netlify.app/random-seeds](https://procthor-rebuttal.netlify.app/random-seeds).",
" > W.2) If my questions on ArchitecTHOR are answered in the main body\n> of the paper, I'm happy to recognize ArchitecTHOR as dataset\n> contribution and increase my score.\n\nWe appreciate you pointing out that as a key contribution, ArchitecTHOR\nshould be discussed in the main body of the paper. This is a fair point.\nGiven the large number of contributions and visuals, we chose to highlight\nthe key findings in the main body of the paper. This included an overview\nof ProcTHOR, its key features and the experimental results obtained by\nmodels employing pre-training via ProcTHOR. Contributions such as\nArchitecTHOR, technical details and qualitative results were just as\nimportant, but, due to the space constraints, were presented in the\nsupplementary material. Given your feedback, we have revised our main\npaper to include ArchitecTHOR and we briefly discuss this section in\nthe context of your questions below.\n\n**(a) Why did you create ArchitecTHOR?** Since ProcTHOR is procedurally\ngenerated, we needed a test set of houses that were drawn from a\nreal-world distribution to test if models trained on ProcTHOR merely\nmemorized biases from the procedural generation, or if they were\ncapable of generalizing to real-world floorplans and object placements.\n\n**(b) What did the designers focus on in designing these spaces?**\nDesigners were tasked with designing houses that mimicked real-world\nhomes and were encouraged to pick and place assets that are typically\nobserved within such homes. They did not have access to the procedurally\ngenerated scenes when they designed ArchitecTHOR.\n\n**(c) What wasn't there yet in AI2THOR that needed to be added here?**\nAI2-THOR includes 2 interactive scene datasets: iTHOR and RoboTHOR.\niTHOR contains single-room-sized scenes whereas RoboTHOR includes\ndorm-sized maze-styled scenes that are not representative of\nreal-world-sized and styled homes. Neither of these represented\nreal-world houses that typically contain many rooms, which is why\nwe chose to hire professional 3D artists to create ArchitecTHOR.\n\n**(d) What are the statistics of the spaces in terms of floor size,\nrooms, number of objects?** ArchitecTHOR validation houses contain\nbetween 4-8 rooms, 121.4 ± 26.1 objects per house, and a typical\nfloor size of 111.1 ± 26.4 m².\n\n**(e) How does that compare to envs generated by ProcTHOR?** By comparison,\nProcTHOR-10K houses have a much higher variance, with between 1-10 rooms,\n75.7 ± 48 objects per house, and a typical floor size of 95.6 ± 74.2 m².\n\n**(f) When should I use one over the other for training or is A-THOR\nonly for evaluation?** ArchitecTHOR is meant to be used only for\nevaluation given the few number of scenes. Using these for training\nwill likely result in overfitting to those 10 houses.\n\n> W.4) Sadly most of the information on HOW ProcTHOR was made was\n> relegated to the appendix.\n\nAs mentioned above in response to your weakness W.2, our paper\ncontains several contributions, details and visuals, all of which were\ndeemed important to present to the reviewers and readers. 
However,\ndue to the space constraints set by NeurIPS, we had to be selective\nabout what went into the main paper and what was added to the appendix.\n\nWe posited that the majority of readers of our paper would be interested\nin learning more about the features of ProcTHOR to determine if it\nwould be applicable to their research and a fewer but important\nnumber of readers would be interested in the technical details of\nhow one creates such environments. Similarly, we posited that the\nmajority of our readers would be interested in seeing our strong\nEmbodied AI results across tasks and simulators, and a fewer but\nimportant number would be interested in finer details such as the \nnoise model employed during motion.\n\nIn addition, technical details for large projects like ProcTHOR,\nwhile valuable in text, can only be completely detailed via a\ncode release. As outlined above, we assure you that the entire\ncodebase for ProcTHOR and our experiments will be made open source. \n\nThese were the reasons why technical details for ProcTHOR were\nadded to the appendix.",
" Thank you for the insightful and valuable feedback. We appreciate the\npositive comments that the paper is well-written (“it was a blast to\nread”), ProcTHOR is a great contribution to the community, and the\nresults are impressive.\n\n> TL;DR my main concerns are (a) reproducibility/code release and\n> (b) why this wasn't submitted to the datasets/benchmarks track.\n\nWe first address the two concerns listed in the TL;DR and then address\nall other concerns below.\n\n> W.1) Why this wasn't submitted to the datasets/benchmarks track.\n\nWe have addressed this concern in the Overall Response seen above.\nPlease refer to that response.\n\n> W.3) Dataset (generator) release.\n\nWe appreciate your concern with releasing the dataset generator. We\nmention in L78 of the original submission that “ProcTHOR will be\nopen-sourced and the code used in this work will be released” and\nwe fully stand by that. In fact, all of our code is prepared for\nrelease, linked anonymously here:\n- ProcTHOR House Generation Code:\n[https://anonymous.4open.science/r/procthor-CF0D/](https://anonymous.4open.science/r/procthor-CF0D/).\n- The ProcTHOR-10K dataset, consisting of the 10K houses used to\ntrain the agents in this paper, is available here:\n[https://anonymous.4open.science/r/procthor-10k-27FB/](https://anonymous.4open.science/r/procthor-10k-27FB/)\n- The code to train the agents is available here: [https://anonymous.4open.science/r/procthor-training-6FDD](https://anonymous.4open.science/r/procthor-training-6FDD)\n\nAfter the double-blind review period has concluded, we will share the\nlink to the open-sourced code-base (licensed under the Apache 2.0\nlicense, making the assets and scenes broadly available for both\ncommercial and non-commercial work). We commit to withdrawing the\npaper if the code is not available by September 14th.\n\n> The ProcTHOR generator does not translate to any other domain.\n\nThe human annotation that was required to create the scene generator\nfor houses was roughly several hours once the infrastructure was in\nplace. This included creating semantic asset groups, labeling where\nobject types can be placed, and creating room specs. Our motivation\nfor procedurally generating houses was to create a strong pre-training\ndataset suitable for well studied downstream tasks in Embodied AI,\nwhich tend to presently focus on household environments.\n\nHowever, adapting the scene generator to environments beyond houses\nis fairly minimal and reasonably fast. For instance, to generate\nclassrooms, one could use the same process, just defining new room\nspecs, potentially adding and labeling new objects, and creating\nsemantic asset groups for co-pairings such as like chairs attached\nto desks.",
" > W4 and Q4) What is being transferred when the visual appearance is\n> significantly different (like HM3D-Semantic ObjectNav)?\n\nThis is a very interesting question. We conjecture that large-scale\npre-training enables the learning of useful navigation primitives\nthat rely less on scene memorization due to the diversity and scale\nof the pre-training dataset. When evaluating 0-shot on visually\nin-domain data (iTHOR, RoboTHOR and ArchitecTHOR), agents perform\nextremely well, often outperforming past SOTA models that relied\non the training data from those benchmarks. HM3D on the other hand\ncan be considered out of domain from a visual standpoint.\nThis likely leads to less impressive 0-shot performance. However,\ntraining for a few million steps tunes the model towards the visual\nattributes of HM3D’s observations and when this is combined with\nthe navigation primitives learnt during training, leads to improved\nresults.\n\nFurthermore, based on empirical observations, pre-training on ProcTHOR\nseems to avoid overfitting during fine-tuning, possibly due to the\ntransfer of navigation abilities from one simulator to another. Here\nare the training curves on HM3D from a model pre-trained with ProcTHOR\ncompared to one trained from scratch:\n[https://procthor-rebuttal.netlify.app/hm3d-curves](https://procthor-rebuttal.netlify.app/hm3d-curves).\n\n> W5 and Q5) Do the scaling ablations hold true when models are\n> finetuned? Does the lack of consistent scaling for HM3D-Semantic\n> ObjectNav reflect poorly on the ability to use ProcThor to benefit\n> real-world robotics?\n\nTable 3 presents ablation results in a 0-shot setting in order to\navoid having to fine-tune 16 different models, which would be\ncomputationally very expensive. However, the reviewer does ask a\nvalid research question, and hence we present numbers for 10 and\n10k ProcTHOR pre-trained models when fine-tuned on RoboTHOR for\nthe task of ObjectNav. As seen, jumping from 10 to 10k provides\na huge improvement not just for 0-shot but also for fine-tuning. \n\n| Number of Training Houses |RoboTHOR Fine-Tuned Success Rate | RoboTHOR Fine-Tuned SPL |\n| ----------- | ----------- |----------- |\n| 10 | 37.2% | 0.303 |\n|10,000 | 56% | 0.441|\n\nAs mentioned in W4 above, 0-shot numbers for HM3D aren't as impressive\nas the other benchmarks, likely due to the visually out of domain\nnature of HM3D compared to ProcTHOR. Note that ProcTHOR pre-training\nis still very beneficial, but requires a little bit of fine-tuning\nto shift the model towards HM3D visually. In light of this, we do\nnot recommend reading too much into the 0-shot HM3D improvements\nfrom 10 to 10k houses; they are all fairly low and differences here\nare less meaningful.",
" > W3 and Q3) How do rendering speeds compare to other frameworks like\n> AI2Thor, iGibson, Gibson, Habitat, Habitat-2.0, etc?\n\nBefore moving to other comparisons, we should first say: ProcTHOR is\nbuilt within AI2-THOR and is identical in speed to AI2-THOR. The only\ncomplication here is that ProcTHOR houses can vary significantly in\nsize and, as shown in Table 1, larger houses generally result in lower\nFPS. The iTHOR scenes from AI2-THOR are all one-room houses and are\napproximately equivalent to the \"Small\" houses from Table 1.\n\nRegarding other comparisons, this is a great question and is\nsurprisingly challenging to answer for several reasons:\n\n1. Different simulators support different agents, each with their\nown action spaces and capabilities, with little standardization\nacross simulators. AI2-THOR, and thus ProcTHOR as well, supports\nthree different agent types: \"high-level\", \"locobot\", and \"arm\".\nThe \"arm\" agent is often slower to simulate than the navigation-only\n\"locobot\" agent as it is more complex to physically model a 6 DoF arm\nas it interacts with objects. This is made even more complex when\nnoting that random action sampling, the simplest policy with which\nto benchmark, is a poor profiling strategy as some actions are only\ncomputationally expensive in rare, but important, settings;\nfor instance, computing arm movements is most expensive when the\narm is interacting with many objects, these interactions are rare\nwhen randomly sampling but we'd expect them to dominate when using\na well-trained agent.\n\n2. Some simulators are relatively slow when run on a single process\nbut can be easily parallelized with many processes running on a\nsingle GPU, e.g. AI2-THOR. Thus single-process simulation speeds\nmay be highly deceptive as they do not capture the ease of scalability.\n\n3. When training agents via reinforcement learning, there are a large\nnumber of factors that bottleneck training speed and so the value\nof raw simulator speed is substantially reduced. These factors include:\n - Model forward pass when computing agent rollouts.\n - Model backward pass when computing gradients for RL losses.\n - Environment resets - for many simulators (e.g. ProcTHOR, Habitat)\n it is orders of magnitude more expensive to change a scene than\n it is to take a single agent step. This can be extremely problematic\n when using synchronous RL algorithms as all simulators will need to\n wait for a single simulator when that simulator is resetting. 
When\n training this means that, in practice, important \"tricks\" are employed\n to ensure that scene changes are infrequent or synchronized, without\n these tricks, performance may be dramatically lower.\n\nTo attempt to control for the above factors, we set up two profiling\nexperiments, one in Habitat HM3D and one using ProcTHOR-10K, where we:\n\n- Use a 2-GPU machine (GeForce RTX 2080 GPUs) where GPU-0 is reserved for\nthe agent's actor-critic policy network and GPU-1 is reserved for\nsimulator instances.\n\n- Train agents for the ObjectNav task (using the same LoCoBot agent with\nthe same action space).\n\n- For both agents, use the same actor-critic policy network, the same\nreferenced in the paper.\n\n- Remove the \"End\" action so that agents always take the maximum 500\nsteps, this minimizes dependence on the learned policy.\n\n- Use a rollout length of 128 with the same set of training\nhyperparameters across both models.\n\n- Use a total of 28 parallel simulator processes, this approximately\nsaturates GPU-1 memory. We found that Habitat instances used\nslightly less GPU memory than ProcTHOR instances and so we could\nlikely increase the number instances for Habitat slightly, but we\nkept these equal for more direct comparison.\n\n- Use a scene update \"trick\" which forces all simulators to advance to\nthe next scene in a synchronous fashion after every 10 rollouts (e.g.\nafter every 10 x 128 x 28 = 35,840 total steps across all simulators).\n\nWe ran the above profiling experiments for ~1M steps and we found that\ntraining with Habitat resulted in FPS ranging between 119.7-264.3\n(230.5 average) and training with ProcTHOR resulted in FPS ranging\nbetween 145.5-179.4 (167.7 average). Training in ProcTHOR is thus\nslower than in Habitat but, for the above set up, this difference\nis around 1.4x rather than what the difference in single process\nrendering speed would suggest. While we did not have the time to\nprofile Gibson, iGibson, or Habitat-2.0 in this rebuttal period,\nthese simulators are generally stated to have single-process\nrendering speeds between AI2-THOR and Habitat and so we expect\ntheir FPS numbers between the two above ranges.",
" > W2 and Q2) Does having only 16 specs limit the diversity?\n\nRoom specs are quite simple and abstract, a single room spec outlines\nthe rooms present in a house along with some connectivity constraints.\nFor example, a single room spec might be a house with 3 beds, 2 baths,\na kitchen, and a living room. As these specs are so generic, they can\ngenerate an unbounded set of houses with unique floorplans and object\nplacements. Hence, while using 16 specs does impose some constraints\non the types of houses that can be generated (e.g. we did not have a\n\"house\" that is just two connected bathrooms), the amount of diversity\nis still extremely high. \n\nIf downstream tasks and environments contain houses unsupported by the\npresent 16 specs, practitioners can easily add new specs manually\nand generate large numbers of diverse houses pertaining to those new specs.",
" Thank you for the insightful and valuable feedback. We appreciate the\npositive comments that the work is very impactful for the embodied\nAI literature and beyond, the efficiency of generating scenes is\nimpressive, both the zero-shot and fine-tuning experiments are\nimpressive, and the paper is well-written.\n\n\n> W1 and Q1) Is the scene generation process novel? Could the authors do\n> a detailed comparison of different steps to existing literature?\n> This is essential for understanding ProcTHOR and improving it in the\n> future.\n\n\nThank you for the suggestion and the paper reference. We agree that a\nmore detailed comparison would be useful. Therefore, we have updated\nthe scene synthesis section of the related works in Section 2 and\nAppendix B.12 now includes a detailed comparison of the different\nsteps of our scene generation process to the literature.\n\nTo summarize, work on scene synthesis is typically broken down into\ngenerating floorplans and sampling object placement in rooms. Our\nwork aimed to generate diverse and semantically plausible houses\nusing the best existing approaches or building on existing works\nin areas that were insufficient for our use case. Our floorplan\ngeneration process is adapted from [1,2], which takes in a high-level\nspecification of the rooms in a house and their connectivity\nconstraints, and randomly generates floorplans satisfying these\nconstraints. Our object placement is most similar to [3, 4, 5, 6, 7],\nwhere we iteratively place objects on floors, walls, and surfaces and\nuse semantic asset groups to sample objects that co-occur (e.g.,\nchairs next to tables). The modular generation process used in this\nwork makes it easy to swap in and update any stage of our house\ngeneration pipeline with a better algorithm. In this work, we found\nthe procedural generation approaches to be more reliable and flexible\nthan the ones based on deep learning when adapting it to our custom\nobject database and when generating more complex houses that were out\nof the distribution of static house datasets [8, 9, 10]. For a more\ndetailed comparison, including a discussion of some of the limitations\nof deep learning approaches, please refer to the Appendix B.12.\n\n[1] Lopes, R., Tutenel, T., Smelik, R. M., De Kraker, K. J., & Bidarra, R. (2010, November). A constrained growth method for procedural floor plan generation. In Proc. 11th Int. Conf. Intell. Games Simul (pp. 13-20). Citeseer.\n\n[2] Marson, F., & Musse, S. R. (2010). Automatic real-time generation of floor plans based on squarified treemaps algorithm. International Journal of Computer Games Technology, 2010.\n\n[3] Zhang, S. K., Xie, W. Y., & Zhang, S. H. (2021). Geometry-based layout generation with hyper-relations among objects. Graphical Models, 116, 101104.\n\n[4] Germer, T., & Schwarz, M. (2009, December). Procedural Arrangement of Furniture for Real‐Time Walkthroughs. In Computer Graphics Forum (Vol. 28, No. 8, pp. 2068-2078). Oxford, UK: Blackwell Publishing Ltd.\n\n[5] Yu, L. F., Yeung, S. K., Tang, C. K., Terzopoulos, D., Chan, T. F., & Osher, S. J. (2011). Make it home: automatic optimization of furniture arrangement. ACM Transactions on Graphics (TOG)-Proceedings of ACM SIGGRAPH 2011, v. 30,(4), July 2011, article no. 86, 30(4).\n\n[6] Xu, K., Chen, K., Fu, H., Sun, W. L., & Hu, S. M. (2013). Sketch2Scene: Sketch-based co-retrieval and co-placement of 3D models. ACM Transactions on Graphics (TOG), 32(4), 1-15.\n\n[7] Chang, A., Savva, M., & Manning, C. D. (2014, October). 
Learning spatial knowledge for text to 3D scene generation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 2028-2038).\n\n[8] Fu, H., Cai, B., Gao, L., Zhang, L. X., Wang, J., Li, C., ... & Zhang, H. (2021). 3d-front: 3d furnished rooms with layouts and semantics. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10933-10942).\n\n[9] Wu, W., Fu, X. M., Tang, R., Wang, Y., Qi, Y. H., & Liu, L. (2019). Data-driven interior plan generation for residential buildings. ACM Transactions on Graphics (TOG), 38(6), 1-12.\n\n[10] LIFULL Co., Ltd. (2015): LIFULL HOME'S Dataset. Informatics Research Data Repository, National Institute of Informatics. (dataset). https://doi.org/10.32130/idr.6.0\n",
" > W3) Procedural generation evaluation: it would have helped to have evaluation with respect to other competing\nprocedural generation schemes (such as those cited in Sec. 2), or with\nrespect to the various design choices within the ProcTHOR environment\nitself.\n\nThank you for the suggestion. We have added a detailed comparison of our\nwork to others in the literature in Appendix B.12, including discussions\nof potential tradeoffs with alternative approaches. Based on the results\nin this work, we conjecture that there are many possible procedural\ngeneration schemes that can be used to effectively train agents. The\ngoal of our work was to build an initial algorithm that generates diverse\nand semantically plausible houses using the best existing approaches or\nbuilding on existing works in areas that were insufficient for our use\ncase. At every step in building ProcTHOR, we performed local studies of\nvarious algorithms and chose the approach that, qualitatively, resulted\nin high-quality generations.\n\nOur floorplan generation algorithm is based on [1], which provides a way\nto procedurally generate diverse and plausible floorplans without any\nexternal data. We chose this approach because it only requires a room\nspec and an interior boundary, and doesn’t rely on an external database\nof floorplans to synthesize one. Thus, it is trivial to scale to include\nnew room types (e.g., garages, balconies, stairways) and generate any\ntype of home (e.g., from studio apartments to massive multi-family homes)\njust by modifying the room specs. [2, 3, 4] train a network to generate\nfloorplans, but they do not support inputting any preferences about the\nnumber of rooms or the types of rooms in the scene. [5] supports passing\nin constraints, but it cannot generalize to new room types not seen\nduring training, or to massive multi-family homes.\n\nMost work on object placement [6, 7, 8, 9] leverages priors about where\nobjects are placed in large 3D scene datasets, such as 3D-Front or SUNCG.\nThe works assume a fixed object database while training the priors and\ngenerating novel scenes. Therefore, we cannot easily adapt such approaches\nto our work as ProcTHOR’s object database is completely different and\nour database does not have massive amounts of 3D scenes with example\nobject placements.\n\nGiven the huge engineering effort and computational costs in designing\nour procedural generation system, generating the ProcTHOR-10k dataset,\nand training a large collection of embodied AI models; a comprehensive\nquantitative ablative study of different generation algorithms is beyond\nthe scope of this work.\n\n[1] Lopes, R., Tutenel, T., Smelik, R. M., De Kraker, K. J., & Bidarra, R. (2010, November). A constrained growth method for procedural floor plan generation. In Proc. 11th Int. Conf. Intell. Games Simul (pp. 13-20). Citeseer.\n\n[2] Nauata, N., Chang, K. H., Cheng, C. Y., Mori, G., & Furukawa, Y. (2020, August). House-gan: Relational generative adversarial networks for graph-constrained house layout generation. In European Conference on Computer Vision (pp. 162-177). Springer, Cham.\n\n[3] Nauata, N., Hosseini, S., Chang, K. H., Chu, H., Cheng, C. Y., & Furukawa, Y. (2021). House-gan++: Generative adversarial layout refinement network towards intelligent computational agent for professional architects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13632-13641).\n\n[4] Wu, W., Fu, X. M., Tang, R., Wang, Y., Qi, Y. H., & Liu, L. (2019). 
Data-driven interior plan generation for residential buildings. ACM Transactions on Graphics (TOG), 38(6), 1-12.\n\n[5] Hu, R., Huang, Z., Tang, Y., Van Kaick, O., Zhang, H., & Huang, H. (2020). Graph2plan: Learning floorplan generation from layout graphs. ACM Transactions on Graphics (TOG), 39(4), 118-1.\n\n[6] Zhang, S. K., Xie, W. Y., & Zhang, S. H. (2021). Geometry-based layout generation with hyper-relations among objects. Graphical Models, 116, 101104.\n\n[7] Wang, K., Lin, Y. A., Weissmann, B., Savva, M., Chang, A. X., & Ritchie, D. (2019). Planit: Planning and instantiating indoor scenes with relation graph and spatial prior networks. ACM Transactions on Graphics (TOG), 38(4), 1-15.\n\n[8] Wang, X., Yeshwanth, C., & Nießner, M. (2021, December). Sceneformer: Indoor scene generation with transformers. In 2021 International Conference on 3D Vision (3DV) (pp. 106-115). IEEE.\n\n[9] Paschalidou, D., Kar, A., Shugrina, M., Kreis, K., Geiger, A., & Fidler, S. (2021). Atiss: Autoregressive transformers for indoor scene synthesis. Advances in Neural Information Processing Systems, 34, 12013-12026.\n\n> Another aspect that could be relevant for future exploration is to optimize aspects of the procedural generator itself, to maximize performance on a set of downstream tasks. A somewhat recent example of this is observed in \"Meta-Sim: Learning to Generate Synthetic Datasets\".\n\nThank you for the suggestion! We have incorporated this work into our future work discussion.",
" Thank you for the constructive and insightful feedback. We appreciate\nthe supportive comments about the paper being “extremely well-written”\nand the work being “thoroughly well-executed”. We address your concerns\nbelow.\n\n> W1) Why not submit to the datasets track?\n\nWe have addressed this concern in the Overall Response above. Please\nrefer to that response.\n\n> W2) Focus on RGB-only agents\n\nProcTHOR supports all sensors supported by AI2-THOR, which includes RGB,\ndepth, segmentation masks, etc. Our primary motivation to evaluate\nRGB-only agents were to choose the hardest sensor configuration (as you\nmentioned) as well as choose the sensors that were most easily available\nand reliable in the real-world. While depth sensors are available, they\nwork at lower resolutions than RGB, still tend to be fairly noisy and\noften have missing depth information across several pixels and regions\nin the image.\n\nWe also note that our presented RGB-only results outperform past works,\nsome of which also include more sensors such as depth.\n\nIn response to your request, we provide experimental results for\nArmPointNav employing the RGB-D sensor. Due to time limitations, we\nhave run the RGBD training for 44M frames (vs 100M in the original\npaper). The experiment is still running; we will update the numbers\nwith the latest results. The results thus far align with\nthe findings of [1, 2] that RGB-D doesn’t provide significant\nimprovements over using RGB only using present-day models and that\nfuture work should investigate new architectures that can use RGB\nand Depth information more effectively.\n\n| Training on | PuSR |SR |\n| ----------- | ----------- |----------- |\n| ProcTHOR-RGB @ 100M | 74.8 |37.9|\n| ProcTHOR-RGBD @ 44M |68.3 |33.1||\n\nEdit (Aug 9): Training to 100M steps with RGBD does not improve numbers meaningfully. As noted previously, this aligns with findings from [1, 2].\n\n[1] Ehsani, K., Han, W., Herrasti, A., VanderBilt, E., Weihs, L., Kolve, E., ... & Mottaghi, R. (2021). Manipulathor: A framework for visual object manipulation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4497-4506).\n\n[2] Deitke, M., Han, W., Herrasti, A., Kembhavi, A., Kolve, E., Mottaghi, R., ... & Farhadi, A. (2020). Robothor: An open simulation-to-real embodied ai platform. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3164-3174).",
" This work presents ProcTHOR, a procedural generation schema to generate synthetic house scenes subject to user-specified constraints. ProcTHOR has been applied to generate a set of 10,000 diverse house environment (fully equipped with sampled objects, materials, and other physics/rendering attributes). Training on these procedurally generated environments achieves state-of-the-art results over a range of embodied-AI tasks that solely rely on RGB images. Strengths\n=========\n\n**S1** Embodied-AI is a very active area of research in the robot learning and computer vision communities. The tasks addressed in this paper are directly based on a number of benchmarks proposed over the last 5 years. I thus expect this work to be relevant to several members at Neurips.\n\n**S2** This paper is extremely well-written. It is easy to follow the core contributions of this work, and the range of experiments (both in terms of environments and tasks) is somewhat broad. I also skimmed through some of the technical details about the dataset and I appreciated all of the thought and work that went into preparing the manuscript and supplemntary materials. Overall, this is thoroughly well-executed work.\n\n---\n\nWeaknesses\n==========\n\nWhile the work itself is well-executed, I have concerns over the technical aspects of this paper, which need further discussion.\n\n**W1** *Core contribution a dataset?* The paper, in its current form, comes off as a dataset contribution: the core takeaway for me was that it is possible to procedurally generate as many scenes as desired; and that the combination of the procedural generation scheme and the scale it achieves results in state-of-the-art performance over a number of embodied-AI tasks. While this is a demonstration that scale (and to an extent domain randomization) results in good performance, the key contribution enabling this is the ProcTHOR generation technique, which is more of a dataset (generator). I feel that the \"datasets and benchmarks\" track at Neurips would have been a much more suitable fit for this paper. I would like to see this aspect discussed further in the author-response phase. Particularly because, the neurips guidelines (here)[https://neurips.cc/Conferences/2022/CallForDatasetsBenchmarks] and (here)[https://neurips.cc/Conferences/2022/ReviewerGuidelines] focus on \"algorithmic advances, analysis, and applications\"; and I have a hard time reconciling with either of these categories in terms of technical contributions of this submission.\n\n**W2** *Focus on RGB-only agents*: The current submissions only evaluate RGB-only agents across environments. While this is arguably a harder task than the scenario where RGB-D data is available, several researchers interested in using the ProcTHOR environment are also likely interested in leveraging RGB-D sensors. It is therefore important to analyze performance on various other sensor configurations, and evaluate the gains ascribed by ProcTHOR when varying sensing modalities.\n\n**W3** *Procedural generation*: Building on **W1**, if the procedural generation scheme itself were to be considered a core contribution, it would have helped to have evaluation with respect to other competing procedural generation schemes (such as those cited in Sec. 
2), or with respect to the various design choices within the ProcTHOR environment itself.\n\n---\n\n(Another aspect that could be relevant for future exploration is to optimize aspects of the procedural generator itself, to maximize performance on a set of downstream tasks. A somewhat recent example of this is observed in \"Meta-Sim: Learning to Generate Synthetic Datasets\"). I would like to see **W1**, **W2**, and **W3** discussed in the rebuttal phase I find the discussion satisfactory",
" The paper presents ProcThor, a framework to sample virtual and interactive 3D environments from an underlying distribution of room and object layouts. In the current work, 10000 3D environments of varying sizes, # rooms, and object distributions are sampled and enable simulation for object search and manipulation tasks. The experiments demonstrate that training policies on the synthetically generated environments and the finetuning it on other datasets like RoboThor, AI2Thor, and Habitat-Matterport3D lead to state-of-the-art performances. Furthermore, the ablation experiments reveal that the transfer task performance continues to improve as more and more virtual 3D environments are sampled for training. # ----------------------- Post-rebuttal update --------------------\nThe authors' responses during the rebuttal addressed majority of my concerns and I'm retaining my rating at 8. I request that the authors update the paper with the reviewer feedback where appropriate. \n\nI'd also like to see the HM3D-objectnav scaling experiments under fine-tuning settings in the final version of the paper (or even in supp). I feel like that would partially answer the question of whether synthetically generating training environments can enable meaningful performance-scaling in real-world like scenes (with very different visual characteristics). \n\n\n# ----------------------- Pre-rebuttal review --------------------\n# Strengths\n- The paper is well-written and easy to understand. The supplementary gives detailed information about generating scenes and helps clarify any ambiguity.\n- The framework developed is very extensive and is a massive feat of engineering (as detailed in supplementary). The authors have shared the code for their experiments and promised to open source it. I expect this to be very impactful for the embodied AI literature and beyond. \n- L208 - The efficiency of generating scenes is also impressive. 10k scenes were generated in 1 hour with 4 GPUs. The scenes are also efficient to store in a small JSON file per scene.\n- The experiments are well designed to cover 3 tasks from multiple online leaderboards. Both the zero-shot and finetuning experiments are impressive, especially considering that the agent only relies on RGB sensing (and not depth or panoramic sensing like some prior work). \n\n# Weaknesses\n- The novelty of the scene generation process is a bit unclear. My impression is that regardless of the novelty of each step of the pipeline, this framework on its own is a massive feat of engineering. Nevertheless, it would be useful to do a careful related works comparison to different steps of the pipeline. For example, the Semantic Asset Groups are similar to hyper-relations from [1].\n- In supplementary Sec. B2, the authors state the use of only 16 room specs for generating the 10000 scenes. This raises a question of diversity for generation. It's also not clear how room specs were obtained in the first place. \n- Table 1 - the rendering speeds are good, but there is no comparison to existing simulation platforms like iGibson, Gibson, Habitat, Habitat-2.0, etc. I feel it is important to benchmark these under a consistent hardware setup. It is okay even if ProcThor is slower than any of these individual frameworks since none of them support procedural generation of interactive scenes (and on such a large scale). \n- Table 2 - in the transfer results to HM3D-Semantic ObjectNav, the zero-shot performance is very low, but the fine-tuned performance is state-of-the-art. 
I'm not sure I understand what is being transferred here (visual representations? Navigation ability? Object-based scene priors? Ability to stop?)\n- Table 3 - the ablation study is good, but it leaves two questions unanswered. The results are all zero-shot, so it is not clear if the trends hold true when models are finetuned. Also, the HM3D zero-shot performance isn't consistently increasing. This raises the question of whether this large-scale procedural generation will actually help for real-world robotics.\n\n[1] Zhang, Shao-Kui, Wei-Yu Xie, and Song-Hai Zhang. \"Geometry-based layout generation with hyper-relations among objects.\" Graphical Models 116 (2021): 101104.\n - Is the scene generation process novel? Could the authors do a detailed comparison of different steps to existing literature? This is essential for understanding ProcTHOR and improving it in the future.\n- How are room specs obtained (and are they realistic)? Does having only 16 specs limit the diversity?\n- How do rendering speeds compare to other frameworks like AI2Thor, iGibson, Gibson, Habitat, Habitat-2.0, etc?\n- What is being transferred when the visual appearance is significantly different (like HM3D-Semantic ObjectNav)?\n- Do the scaling ablations hold true when models are finetuned? Does the lack of consistent scaling for HM3D-Semantic ObjectNav reflect poorly on the ability to use ProcThor to benefit real-world robotics?\n The authors have adequately addressed this.",
" The work introduces ProcTHOR, a procedural indoor environment generator. Each environment generated by ProcTHOR is a plausible environment with a physics engine, which can be used for training agents to solve various indoor tasks. The authors also show that pretraining on a sample of 10k homes from this generator, either in a zero-shot setting or after fine-tuning, the trained agent outperforms top contributions on several leaderboards. TL;DR my main concerns are (a) reproducibility/code release and (b) why this wasn't submitted to the datasets/benchmarks track.\n\n### Strengths:\n- **S.1)** The paper is really well written and structured. It was a blast to read. All the illustrations and plots are meaningful and illustrate the work.\n- **S.2)** The contribution of ProcTHOR itself is great and really needed by the community. \n- **S.3)** The RL results look great. The fact that with a bit of fine-tuning (and sometimes without), this outperforms many baselines is impressive.\n\n### Weaknesses:\n- **W.1)** This really should've been submitted to the dataset & benchmarks track and I don't understand why the authors didn't do that. Because now I have to evaluate this for its methodological contribution and there isn't any. To my best knowledge, the ProcTHOR generator was pieced together with various artist assets, heuristics, and a lot of human annotation. That isn't a method that translates to any other domain and the authors also make no claim that their ProcTHOR can easily be adapted to car assembly factories for example. The experiments that are shown at the end of the paper, that I positively highlighted in (S.3), aren't methodologically new. The authors even go so far as to say they deliberately used a simple agent network to highlight the benefits of the dataset. Ultimately, the decision is not on me if this fits in into the main track of the conference, so I'll refer this to the AC.\n- **W.2)** Part of reviewing a paper for me is checking if the contributions claimed by the authors live up to the methods and results explained in the paper. Sadly, ArchitecTHOR (great name btw) doesn't do that. It's not even mentioned in the main body of the paper. Why is this listed as contribution if the main paper doesn't even discuss it? Sure, there are some pictures in the appendix but the main paper has to stand on its own - the appendix is for clarification and additional detail. (a) Why did you create ArchitecTHOR? (b) What did the designers focus on in designing these spaces? (c) What wasn't there yet in AI2THOR that needed to be added here? (d) What are the statistics of the spaces in terms of floor size, rooms, number of objects? (e) How does that compare to envs generated by ProcTHOR? (f) When should I use one over the other for training or is A-THOR only for evaluation? If these questions are answered in the main body of the paper, I'm happy to recognize ArchitecTHOR as dataset contribution and increase my score.\n- **W.3)** The authors make the vague promise that at some point, code will be released publicly. This is a dataset paper and I have no way to reproduce the hundreds if not thousands of person-hours that went into this. If this is not reproducible and not publicly available, it's of no use to the scientific community. If this was a paper introducing a new method, this would be different because, from the mathematical description of the method, I'd be able to at least reproduce some of it. Since this is not the case, I have to make a deduction in my rating. 
I have to insist that you release the dataset (generator) if you want us to accept a dataset paper.\n- **W.4)** In (S.1) I praised the writing and structure and I stand by that, but this was made at the sacrifice of some essential details. Sadly most of the information on HOW ProcTHOR was made was relegated to the appendix. I understand that this is due to space constraints but it kind of diminishes the value of the main paper with respect to the appendix. \n- **W.5)** Validity of claims: In the related works section, you mention HM3D, OpenRooms, and Megaverse and your main criticism of these is that they're either too small or too game-like but you don't verify that claim as far as I can tell. A way to verify this would be to use these environments for pretraining, instead of ProcTHOR-10k, and check the 0-shot and fine-tuning performance, at least in object-nav tasks (because yes, some benchmarks require interaction, and that's not offered in these but that's beside the point). This way, you can verify that (a) the size/diversity of ProcTHOR-10k is necessary and (b) the visual fidelity of ProcTHOR is necessary for this demonstrated transfer performance. - **Q.1)** I'm not sure the paper sufficiently explains what you mean by \"fully interactive\". I understand that objects can be modeled as rigid bodies, but do your robot agents actually do friction-based grasping, or are objects tethered to the gripper when the \"grasp\" action is executed with the gripper reasonably close to the object? And for shelves/fridges/things that can be opened, do they only have a binary open/closed state, or can a robot open them by an inch?\n- **Q.2)** If all objects are rigid bodies, how do you assign mass, friction, and elasticity? Are these also procedural or can they be changed?\n- **Q.3)** What percentage of objects have these states (open/closed, etc)?\n- **Q.4)** In the appendix, you mentioned how many frames of training are required but how does that translate to wall clock time, i.e. how long did you train a policy for on ProcTHOR-10k until convergence?\n- **Q.5)** It's a bit crazy to me to say that you couldn't run additional experiments to verify performance and you took the first result because they're so expensive. In RL, it's a well-known phenomenon that different seeds can lead to vastly different performance or even different implementations of the same algorithm. Could you please elaborate, or if you had time since submission, report the mean/std of your results and how many seeds were used?\n- **Q.6)** Did you mention the 16 different scene specifications somewhere? Why these?\n- **Q.7)** Same for the 18 different Semantic Asset groups.\n- **Q.8)** You mention customizability and \"a few simple lines of specifications\" but what does that actually look like?\n- **Q.9)** Thanks for providing the performance table in Tab.1. That's always been my main gripe with AI2THOR. But how does the process distribution work? I.e. how do you run 15 processes on each of the 8 GPUs? ~~Limitations aren't mentioned in the main body of the paper. I thought that was supposed to be included last year or 2 years ago.~~\n\nIn addition to the limitations listed in the appendix, I'd add that the lighting model is still relatively simple and might not correspond to lighting conditions in real houses but this effect may be \"domain-randomized\" away. 
Also, robot navigation is vastly simplified; real-world environments might have different friction surfaces, carpets with bumps, stairs connecting different rooms on the same floor, etc.\nAlso, real environments might have more decoration and trinkets lying around, as well as soft objects that can appear in many different configurations (a hoody thrown over a chair or splayed out on the sofa)\n\nEDIT: apparently limitations can go into the appendix and the authors did just that. So this is fine."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"HYArW8bFU1WE",
"Vsi1qwnDNKb",
"VmKms6fvNIwO",
"XgNJZM4m4y",
"iRB6vYvIGNGt",
"Vsi1qwnDNKb",
"quTNP5JCuUh",
"iG-XnnsV3kB",
"lYPczfQCw1D",
"JXpjVKrsxCj",
"nips_2022_4-bV1bi74M",
"XgNJZM4m4y",
"XgNJZM4m4y",
"XgNJZM4m4y",
"NihVzPgZSSY",
"g8VaCGU4CH8",
"g8VaCGU4CH8",
"g8VaCGU4CH8",
"0nsuTMch1lE",
"uMqyNVJymbU",
"N050GQ-RfCu",
"nips_2022_4-bV1bi74M",
"nips_2022_4-bV1bi74M",
"nips_2022_4-bV1bi74M"
] |
nips_2022_2GsQ8dyfe45 | M$^4$I: Multi-modal Models Membership Inference | With the development of machine learning techniques, the attention of research has been moved from single-modal learning to multi-modal learning, as real-world data exist in the form of different modalities. However, multi-modal models often carry more information than single-modal models and they are usually applied in sensitive scenarios, such as medical report generation or disease identification. Compared with the existing membership inference against machine learning classifiers, we focus on the problem that the input and output of the multi-modal models are in different modalities, such as image captioning. This work studies the privacy leakage of multi-modal models through the lens of membership inference attack, a process of determining whether a data record involves in the model training process or not. To achieve this, we propose Multi-modal Models Membership Inference (M$^4$I) with two attack methods to infer the membership status, named metric-based (MB) M$^4$I and feature-based (FB) M$^4$I, respectively. More specifically, MB M$^4$I adopts similarity metrics while attacking to infer target data membership. FB M$^4$I uses a pre-trained shadow multi-modal feature extractor to achieve the purpose of data inference attack by comparing the similarities from extracted input and output features. Extensive experimental results show that both attack methods can achieve strong performances. Respectively, 72.5% and 94.83% of attack success rates on average can be obtained under unrestricted scenarios. Moreover, we evaluate multiple defense mechanisms against our attacks. The source code of M$^4$I attacks is publicly available at https://github.com/MultimodalMI/Multimodal-membership-inference.git. | Accept | This work studies membership inference attacks for multimodal models. It proposes a few different attacks under different assumptions on the attack model, and evaluates them empirically.
The reviewers found the problem interesting and the paper well-written. The paper is a welcome addition to the literature on membership inference attacks and should be of interest to this conference. I would encourage the authors to address the feedback from the reviewers in the final version. I recommend acceptance. | train | [
"g7vgPMYGERb",
"6dTs5a-7czX",
"sak57AaABQ0",
"_d9nRmTYGIb",
"XQ-VTs3YMMu",
"MSIN-Yr3rF3",
"oRG6jjzrVC",
"MZs-PsYHmMS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comment. As we responded previously, we considered METEOR [r2], a metric based on precision and recall scores, as an additional metric, but there is not much difference in terms of performance in comparison to what the paper used. Due to little difference in terms of performance and space limitation, METEOR is excluded in this submission. We commit to discussing this metric in the final version.",
" For limitation 1 (about investigating better metrics), you said “it would be interesting to dive into … in the future”. But have you tried other metrics, which you can discuss (even if they did not perform very well)?\n",
" We thank the reviewer for the valuable comments and suggestions. \n\nWe will discuss the impact of applying the defense by preventing overfitting in the revised version.\n\nWe have revised Figures 4 and 6 by highlighting the three scenarios, i.e. addressing different scenarios with different text colors. Explanation would be included in corresponding captions. \n",
" We thank the reviewer for the valuable comments and suggestions.\n\n* Weakness 1 and Question 4 - theoretical analysis\n\nThanks for the good suggestion. We agree that some theoretical results will definitely strengthen the paper. However, we would like to argue that, at the current stage, our research is an empirical study, which is the first to investigate and demonstrate membership inference attacks on multi-modal models. We would like to dive into theoretical analysis in the future.\n\n* Weakness 2 - image modality information\n\nWe would like to clarify that, although the metric-based attack model predicts the membership status by comparing the output and ground truth in text modality, the image modality information is still involved in the MIA, as (i) we train shadow models with the dataset of image-text pairs; and (ii) the metrics compare the output and the ground truth of the same image. \n\n* Weakness 3 and Question 3 - evaluation on SOTA\n\nWe have further evaluated our metric-based attack and feature-based attack on FastSpeech2 [r3], which is a SOTA text-to-speech (TTS) application that takes text as input and speech/audio (Mel spectrogram) as output. We randomly pick 3,000 samples from its training dataset, LJSpeech [r4], as members and 3,000 samples from another dataset, LibriTTS [r5], as non-member samples. We use all 6,000 samples to train the multimodal feature extractor in the feature-based method. The experimental results show that the metric-based attack achieves an 86.43% success rate and the feature-based attack achieves 94.24%. We will include more details about the experimental settings and results analysis in the revised version. We have considered SOTA image captioning models, such as RefineCap [r6] and RDN [r7]. As two studies [r6, r7] would be very time-consuming to implement without publicly available code and two works [r8, r9] are not reproducible due to our current computing resources, we chose to evaluate our attack on the classic encoder-decoder image captioning model [r10]. \n\n* Weakness 4 and Question 2 - experiment description\n\nDue to the space limit, we provide the training details of the multimodal feature extractor in Section C in Supplementary Materials. In our experiment, the change in the structure of the multimodal feature extractor (MFE) in the feature-based method has no essential influence on our conclusion. Any MFE that can extract appropriate features should be able to work in the feature-based attack. Our research is the first step in the exploration of membership inference attacks on multimodal models. Here we choose one usable MFE able to effectively extract the features from two different modalities for evaluation. So, we can confirm that our feature-based method is able to infer membership information. We might further study the influence on the structure of MFE. As we are the first to investigate membership inference attacks on multimodal models, to the best of our knowledge, there is no similar work that could be fairly considered as a baseline. In such a situation, we followed the approach in recent research on membership inference attacks [r11, r12] and set the baseline as random guessing.\n\n* Question 1 - encoder\n\nDifferent encoders in target models may yield different results. In our work, we investigate image captioning models with two different encoders, respectively based on the structure of Resnet-152 and VGG-16. 
The results show that the image captioning models with Resnet encoder are slightly more vulnerable to our attacks, where the attack success rate on the target model with Resnet encoder is 0.4%(in average) higher than the attack success rate on the target model with VGG encoder. The reason is perhaps, as the network structure of Resnet is deeper than VGG, the Resnet encoder may extract more representative features and thus benefits from the membership inference attack. However, the scope of our current research focuses on the empirical study of membership inference attacks on multimodal models, but it is definitely worthy of diving into this area in the future.\n\n[r3] Investigating on Incorporating Pretrained and Learnable Speaker Representations for Multi-Speaker Multi-Style Text-to-Speech. ICASSP, 2021.\n\n[r4] The LJ Speech Dataset. https://keithito.com/LJ-Speech-Dataset/, 2017\n\n[r5] LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. Interspeech, 2019\n\n[r6] RefineCap: Concept-Aware Refinement for Image Captioning. CoRR, 2021.\n\n[r7] Reflective Decoding Network for Image Captioning. ICCV, 2019.\n\n[r8] X-Linear Attention Networks for Image Captioning. CVPR, 2020.\n\n[r9] ClipCap: CLIP Prefix for Image Captioning. CoRR, 2020.\n\n[r10] Show and Tell: A Neural Image Caption Generator. CVPR, 2015.\n\n[r11] Membership Inference Attacks against Machine Learning Models. IEEE Symposium on Security and Privacy (Oakland), 2017.\n\n[r12] Membership Inference Attacks against Recommender Systems. ACM CCS, 2021.\n",
" We thank the reviewer for the valuable feedback and address the concerns as follows.\n\n* Weakness - evaluation on non-overfit models\n\nIn Section 5 of the paper, we present the performance of applying attack on the non-overfit models (i.e. models that are enhanced by overfitting prevention methods). Moreover, in the supplementary materials, we have tested our attacks on the model that is trained on the whole COCO2017 dataset, which is a non-overfit model (performs quite well on the test set with a BLEU score achieving 0.677). The experimental result shows that the proposed attack performs well on non-overfitted models (as shown in Figure 10 and Section D in Supplementary Material). \n\n* Question 1 - ROC and data augmentation\n\nFollowing the suggestions from Carlini et al. [73], we report the true positive rate and false positive rate in the evaluation of membership inference attack. We will update ROC with log-scale in the revised version (as shown in Figure 2 in the updated Supplementary Materials). \nData augmentation can be used to improve the attack. In the feature-based method, we trained the multimodal feature extractor (MFE) with data augmentation [r1]. The average attack success rate of data augmented MFE is 72.69% (in all scenarios), while the feature-based attack without data augmentation training achieves 69.51% on average (as shown in Figure 6). We will add more details of the experiment in the revised version.\n\n* Question 2 - training dataset overlapping\n\nIn unrestricted scenarios, where the shadow training dataset can be overlapped with the target training dataset, the attack performance is better than that in constrained scenarios where no overlap exists, as shown in Figure 4 and Figure 6. The reason is that more overlaps between the shadow and target training datasets may lead to a better mimicking of the target model by the shadow model. Then the thresholds learned from the shadow models could be more suitable for the target model. Therefore, if more shadow training data overlaps with the target training dataset, the attack success rate can be increased.\n\n* Limitation 1 - metrics\n\nThanks for the suggestion. We agree that it is possible for the attackers to choose more powerful metrics in our proposed metric-based attack to achieve a higher attack success rate. Considering that the BLEU scores indicate the precision of the results and the ROUGE scores indicate the recall of the results, there exists improvement space when metrics with more factors are involved. Although our research focuses on the demonstration of a metric-based MIA, it would be interesting to dive into an ablation study with more metrics considered in the future. For example, we have considered METEOR [r2], a metric based on precision and recall scores, as an additional metric, but did not use it in the final version due to its performance.\n\n* Limitation 2 - evaluate on SOTA\n\nWe have further evaluated our metric-based attack and feature-based attack on FastSpeech2 [r3], which is a SOTA text-to-speech (TTS) application that takes text as input and speech/audio (Mel spectrogram) as output. We randomly pick 3,000 samples from its training dataset, LJSpeech [r4], as members and 3,000 samples from another dataset, LibriTTS [r5], as non-member samples. We use all the 6,000 samples to train the multimodal feature extractor in the feature-based method. The experimental results show that the metric-based attack achieves an 86.43% success rate and the feature-based attack achieves 94.24%. 
We will include more details about the experimental settings and results analysis in the revised version.\n\n[r1] A survey on image data augmentation for deep learning. Journal of Big Data, 2019\n\n[r2] METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. 2005.\n\n[r3] Investigating on Incorporating Pretrained and Learnable Speaker Representations for Multi-Speaker Multi-Style Text-to-Speech. ICASSP, 2021.\n\n[r4] The LJ Speech Dataset. https://keithito.com/LJ-Speech-Dataset/, 2017\n\n[r5] LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. Interspeech, 2019\n",
" The authors proposed a Metric-based and a feature-based membership inference attack for multi-modal machine learning. The authors evaluated the proposed attacks under unrestricted, data-only, and constrained settings on various datasets and architectures to show the effectiveness of the attacks. The authors also evaluated the effect of two mechanisms (preventing overfitting and using DP) to mitigate MIA. Strength:\nThe authors proposed the first MIA against multi-modal ML.\nThe overall presentation of the paper is well organized and easy to read. \n\nWeakness:\nThe authors propose preventing overfitting (data augmentation + l2 reg) as a mechanism to mitigate MIA. These two techniques are commonly used in ML training, and I believe the attack should be evaluated on non-overfit target models. \n 1. For Fig 8, shouldn’t the ROC be plotted in log-scale as Carlini etal. [73] suggested? Also, can data augmentation be used to improve the attack as in [73]?\n2. For the unrestricted setting, does the shadow training dataset overlap with the target training dataset? Are there any differences in the attack success rate if the shadow training data does or does not overlap with the target training dataset? \n 1. In the paper, the authors mentioned the attack is not as effective when the target model is trained on MS-COCO dataset (the attack success rate is only slightly above 50%) and the reason might be that ROUGE and BLEU scores cannot capture the semantics well. Have you considered other scoring methods? \n2. The authors only evaluated the proposed on image captioning tasks. It would be nice if the authors can show that the attack is successful on other multimodal applications. \n",
" This paper proposes an approach for inferring the membership status in the multi-modal data, called M^4I, which aims to study the privacy leakage problem in the multi-modal learning setting. Specifically, the authors introduce two attack methods for this purpose, including a metric-based and a feature-based. Authors test the proposed methods on only a task, i.e., image captioning. The experimental results show the advantage of the proposed method. Strengths:\n1. First work to explore the membership inference problem in a multimodal setting. \n2. Proposing a meaningful setting for the work.\n3. Examining performance on four real-world datasets for one task.\n\nWeaknesses:\n1. The proposed models contain limited technical contributions. For example, \n1) The metric-based model only employs two existing similarity metrics on the texts as a feature for classification. Meanwhile, the feature-based model adopts an existing pre-trained model and a two-layer neural network for classification. \n2) The reviewer is expected to see some theoretical results that can guarantee the discrimination errors. \n\n2. Although the paper claims that it is under the multimodality setting, it is strange that it misses the image modality information in the metric-based model.\n\n3. The paper only evaluates the model in one task. It is doubtful whether it can be extended to more tasks for evaluating the generalization ability. Therefore, it is hard to draw a conclusion that this method can perform well on other multimodal tasks.\n\n4. The experiment section lacks a detailed description and complete comparison. For example, \n1) In Section 3.2 and the main figures, the authors mention that the feature-based model employs a pre-trained image encoder and a text encoder. However, it does not tell the exact pre-trained model in the experiment section.\n2) The reviewer also doubts how the effect of the pre-trained models. The paper lacks sufficient experimental comparisons on different pre-trained models. \n3) The experiments should add more intuitive baselines to demonstrate the effectiveness of the proposed methods.\n 1. Will different encoders yield different results?\n2. How does the structure of the feature-based model affect the results? \n3. How does your model perform on other state-of-the-art image caption models?\n4. Can the paper present a theory to guarantee the discrimination errors?\n Yes",
" \nThe paper studies the privacy leakage of multi-modal models, proposing a membership inference attack against multi-modal models. Two attack methods are introduced, the metric-based M4I and the feature-based M4I. In metric-based M4I, the adversary can score the data and use a threshold or a binary classifier to distinguish between the scores of member data and non-member data; while in feature-based M4I, a pre-trained shadow multi-modal feature extractor is used to conduct data inference attack. The experimental results show that both attack methods can achieve strong performances. \n+ The topic related to multi-modal membership inference attacks is interesting and the privacy leakage of multi-modal models is an important security issue. As far as I know, this is the first research so far focusing on the membership inference of multi-modal models. Considering that the multi-modal model is one of the most rapidly rising technology in recent years, and its highly sensitive application scenarios, eg, medical, voices, or videos, the privacy risk in this domain is critical to the security community.\n\n+ The proposed attack methods are practical and the experimental results indicate a strong performance. I appreciate the design of different scenarios (unrestricted, data-only, and constrained scenarios), which provide a thorough evaluation of the proposed attack methods. Both the metric-based and feature-based M4I are evaluated under different scenarios using different datasets. The performance is compared with the random guess, which is still reasonable, considering that the multi-modal membership inference is a relatively new research area.\n\n+ Mitigation of the proposed attack has been evaluated and discussed\n\nIt is always good to see the evaluation and discussion related to the mitigation of a proposed attack. Three countermeasures have been involved in the experiments. I agree that, although both metric-based and feature-based attacks are affected by the privacy-enhanced training, the trade-off can not be ignored; while the overfitting results could be a potential solution. It would be better to also discuss the trade-off of applying the defence by preventing overfitting.\n\nI cannot find any major weaknesses that would keep me from accepting the paper but there are my two cents to further improve the paper.\n\n- It would be nice if the three scenarios are explained and marked in Figures 4 and 6. The current figures blend the results of three scenarios in heat maps, which is good, but it could be hard for readers to compare the attack performance.\n\n Please mark the three scenarios explicitly in Figures 4 & 6.\n See my advices above."
] | [
-1,
-1,
-1,
-1,
-1,
6,
4,
9
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"6dTs5a-7czX",
"XQ-VTs3YMMu",
"MZs-PsYHmMS",
"oRG6jjzrVC",
"MSIN-Yr3rF3",
"nips_2022_2GsQ8dyfe45",
"nips_2022_2GsQ8dyfe45",
"nips_2022_2GsQ8dyfe45"
] |
nips_2022_qtQ9thon9fV | FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction | The advent of deep learning has led to significant progress in monocular human reconstruction. However, existing representations, such as parametric models, voxel grids, meshes and implicit neural representations, have difficulties achieving high-quality results and real-time speed at the same time. In this paper, we propose Fourier Occupancy Field (FOF), a novel, powerful, efficient and flexible 3D geometry representation, for monocular real-time and accurate human reconstruction. A FOF represents a 3D object with a 2D field orthogonal to the view direction where at each 2D position the occupancy field of the object along the view direction is compactly represented with the first few terms of Fourier series, which retains the topology and neighborhood relation in the 2D domain. A FOF can be stored as a multi-channel image, which is compatible with 2D convolutional neural networks and can bridge the gap between 3D geometries and 2D images. A FOF is very flexible and extensible, \eg, parametric models can be easily integrated into a FOF as a prior to generate more robust results. Meshes and our FOF can be easily inter-converted. Based on FOF, we design the first 30+FPS high-fidelity real-time monocular human reconstruction framework. We demonstrate the potential of FOF on both public datasets and real captured data. The code is available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/FOF. | Accept | This paper received 3 positive reviews: 2xBA + A. All reviewers acknowledged that this work introduces meaningful and non-trivial contributions, it is well presented, and the claims are supported by strong empirical performance. The remaining questions and concerns were addressed in the authors' responses, which seemed convincing to the reviewers.
The final recommendation is therefore to accept. | val | [
"M6rqQb7-hlx",
"9eN2CqumnDV",
"W9AraIXswtX",
"UMGP2SxeYW",
"bLeFpXIu81w",
"5PlE-6bsopa",
"gvHTgxYFTWh",
"dHBS8kLaRHP",
"b76tx2saz3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank authors' time and effort to answer my questions. I think the newly-added results are informative and demonstrate FOF's robustness. I maintain my score for this work.",
" Dear Reviewers:\n\nThank you very much for your time and effort in reviewing our paper. It is less than **15 hours** before the end of the Author-Reviewer Discussion and we have not received any response yet. We wonder if our replies address your concerns. Our updated supplementary document includes new experimental results you are concerned with. Thank you very much.\n\nBest regards!\n",
" Thanks for your acknowledgement for our work and the constructive comments. We will address your questions and concerns in the following:\n\n> **Q1:** I do not see some major weakness. But it would be great if authors can discuss more about the quality of the reconstruction. For example, in Fig. 4, Fig. 5, and video in the supplementary, I observe that PIFuHD usually maintains more details even though FOF-Normal already utilizes the inferred normal maps. It would be beneficial to discuss why FOF representations are not detailed than PIFuHD.\n\n**Reply:** Regarding the performance in terms of detail recovery, there are two main reasons: \n1. FOF uses bandlimited approximation (keeping only the first 31 frequencies in our implementation), inevitably discarding high-frequency components of detailed geometries. However, this also makes FOF capable of avoiding high-frequency artifacts as shown Fig. 4 and Fig. 5 of the paper. This is also consistent with the quantitative results in Table 2 of the paper: FOF-based methods have the lowest geometric errors (in Chamfer and P2S) and are marginally worse than PIFuHD in terms of the normal image error. \n2. PIFuHD uses a network with larger capacity in a coarse-to-fine pipeline, which is more computation and memory demanding. These strategies can also be used for our FOF-based reconstruction to enhance the results but would inevitably make the pipeline more cumbersome to some extent and might sacrify the merit of real-time implementation. The above discussion will be added to the final version.\n\n> **Q2:** The training loss of relative error (absolute error may not be that informative as the magnitude matters) \n\n**Reply:** Thanks for your suggestion. The training loss of relative error has been added in Sec. 8 of the supplementary document.\n\n> **Q3:** Manually add noises to the GT coefficients and plot a curve to show the correlation between the perturbed coefficients and the quantitative quality of the reconstructed meshes.\n\n**Reply:** We have manually added noises to the GT coefficients and shown the results in Sec. 9 of the supplementary document.",
" Thanks for your acknowledgement for our work and the constructive comments. We will address your questions and concerns in the following:\n\n> **Q1:** The paper in general follows the engineering details of other methods such as PIFU and ICON and the difference is mainly the simplified way of representing occupancy signals with fourier transforms. Such strategy has been seen in use in other problems, and it inhirits the defects of bandlimited approximations.\n\n**Reply:** Thanks for your careful review. Our work is not a trivial combination of Fourier transform and engineering details of other methods: \n+ As the first work to represent a 3D geometry with a 2D field in frequency domain, we enable memory efficiency, real-time implementation, high quality, robustness, and generality for monocular human reconstruction. Instead of simply adopting the discrete Fourier transform, we devise the FOF representation with Fourier series, achieving sampling scalability. With this merit, our method can adapt to different sampling rates in the inference stage without introducing systematic sampling mismatch. The benefits of analysis in frequency domain have been verified in in various fields [R1], [R2], [R3].\n+ Compared with existing methods, we provide a compact and simple framework with an image-to-image network benefitting from our FOF representation, which achieves real-time, high-quality and robust reconstruction without any engineering tricks. \n+ Our FOF has better compatibility and extensibility. In the original paper, we showed two variants, FOF-SMPL and FOF-Normal, which use SMPL as a prior or use normal maps to enhance the visual results. Please note that we present a new way to exploit the SMPL as a prior, which is completely different from PaMIR and ICON. \n\n[R1] J. Lin, Y. Liu, J. Suo and Q. Dai, \"Frequency-Domain Transient Imaging,\" in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 5, pp. 937-950, 1 May 2017, doi: 10.1109/TPAMI.2016.2560814.\n\n[R2] Wang, L. , et al. \"Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time.\" in Proceedings of CVPR2022.\n\n[R3] Changjian Zhu, Hong Zhang, Qiuming Liu, Yanping Yu, and Hongtao Su, \"Frequency analysis of light field sampling for texture information,\" Opt. Express 28, 11548-11572 (2020)\n\n> **Q2:** With the fourier transformation of 1D the occupancy field given in Equation 6, what is the error bounds of the distance between recovered v.s. groundtruth surface point?\n\n**Reply:** Following the analysis in [R4], we measure the approximation error of occupancy signals by the expected squared error, which is formulated as: $\\epsilon=E\\left[ \\left\\Vert \\hat{f}(z)-f(z) \\right\\Vert_2^2 \\right] = E\\left[ \\int_{-1}^{1} \\left| \\hat{f}(z)-f(z) \\right|^2 dz\\right]$, where $f(z), z\\in [-1, 1]$ is the original occupancy signal, and $\\hat{f}(z)$ is the approximated signal. Note that occupancy signals belong to the family of piecewise constant functions that are constant expect for a few jumps (discontinuity points). For the presence of discontinuity, the Fourier series coefficients decay by the following rate:$\\left| a_n \\right|, \\left| b_n \\right| \\sim 1/n$, where $a_n$ and $b_n$ are the cosine and sine coefficients at frequency $2\\pi n$ as defined in the paper. 
The linear approximation error by the first $2N+1$ terms (the same as in the paper) has the following order: $\\epsilon_N \\sim \\sum_{n=2N+1}^{\\infty}1/n^2 \\sim 1/N$, where $\\epsilon_N$ is the expected squared error with the first $2N+1$ terms for approximation [R4]. In summary, the approximation error bound of occupancy signals in FOF representation has the order of $1/N$, and the approximation would be more accurate as the increasing of the number of retained terms. The selection of N is also discussed in the reply of Q3 from Reviewer mKrW.\n\nHowever, it is difficult to analyze the theoretical bound of overall reconstruction error since our overall reconstruction pipeline does not only include the Fourier approximation, but also involves isosurface extraction using Marching Cubes, which means minor differences that do not cross the inside/outside threshold will have no impact on the reconstructed shape. This nonlinear operation also impedes strict mathematical analysis of the overall method. But, we reach a conclusion in a loose sense: the overall surface reconstruction error is lower than the bound above ($\\epsilon_N \\sim 1/N$) as the surface fitting in Marching Cubes serves as a kind of filter to further reduce the 3D geometrical errors. \n\n[R4] M. Vetterli, \"Wavelets, approximation, and compression,\" in IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 59-73, Sept. 2001, doi: 10.1109/79.952805.\n\n> **Q3:** More visualization of the recovered 1D occupancy curves. It would also be nice if the authors visualize the reconstructed 1D occupancy curve given different number of fourier basis.\n\n**Reply:** We have visualized the curves in Sec. 7 of the supplementary document. \n\n",
" > **Q4:** Discussion of different ways to represent bandlimited signals and its implication in the problem of interest.\n\n**Reply:** Besides Fourier representation, there are other possible ways to represent the occupancy signals, which are not bandlimited signals but can be approximated with bandlimited signals:\n\n1. As the first work to represent a 3D geometry with a 2D field in frequency domain, we enable memory efficiency, real-time implementation, high quality, robustness, and generality for monocular human reconstruction. Instead of simply adopting the discrete Fourier transform, we devise the FOF representation with Fourier series, achieving sampling scalability. With this merit, our method can adapt to different sampling rates in the inference stage without introducing systematic sampling mismatch. The benefits of analysis in frequency domain have been verified in in various fields [R1], [R2], [R3].\n2. For fast inference, we choose the Fourier series representation, and use linear approximation of the first (2N+1) terms in our scheme for its promising sampling scalability and fast implementation. The loss of high-frequency information results in a lack of some visual details, but also makes the FOF more robust without high-frequency artifacts, which is confirmed by Table 2 in the paper.\n3. Wavelet transforms with better space(time)-frequency localization are potentially more efficient in representing signals with singularities by nonlinear approximation (picking the M largest terms). However, the linear approximation (using the first M terms) of wavelet transform has the same order of approximation error as the Fourier presentation [R4] and can blow up the dimension of the representation as mentioned in your Q5. \n\n> **Q5:** In the discussion, the authors mentioned future works could incorporate wavelet transforms to improve results for thin objects. But it seems that would blow up the dimension of the representation with an naive implementation, and its computational advantage over voxel-based representation may diminish. I would be curious to learn if the idea of utilizing basic signal processing techniques advocated by the authors could indeed be generalized to more complex scenarios.\n\n**Reply:** As discussed in the response to Q4, a straightforward application of a discrete wavelet transform would diminish several advantages of our method such as fast implementation and sampling scalability. Instead, it is possible to borrow the multiscale idea of wavelet to build a hierarchical inference structure of space-frequency representation. It is worthy of exploring nonlinear approximation to lower the reconstruction error bound while keeping fast implementation. It is also important to devise a particular family of multiscale representation to achieve sampling scalability similar to Fourier series representation.\n\nMoreover, besides human bodies, our FOF-based reconstruction can be also applied to more complex scenes. We will show some examples on the project page when the code is released and mention more potential applications in final version. \n",
" Thanks for your acknowledgement for our work and the constructive comments. We will address your questions and concerns in the following:\n\n> **Q1:** Minor improvements on paper writing: 1) A typo in Line 236 on the symbol subscript. 2) Table 1 would be much more informative if references could be added for each type of method. 3) It would be nice to specify if any post-processing techniques are used such as smoothing or connected components. \n\n**Reply:** Thanks for your careful review. We have revised the paper following your suggestions. We do not use any post-processing techniques in our implementation. All our results are produced directly by the Marching Cubes algorithm from the inferred FOF representation. This has been emphasized in the revised version. \n\n> **Q2:** There is a significant performance gap, both quantitatively and qualitatively, between vanilla FOF and the enhanced version FOF-SMPL. However, in the main methodology section, the SMPL part was somehow downplayed. If this was done to emphasize FOF as the main novelty, it would be more convincing to apply FOF to other relevant tasks. Otherwise, if the focus is on 3D human reconstruction, the SMPL part should be included as an important subsection with more details. \n\n**Reply:** Thanks for your suggestion. This work focuses on 3D human reconstruction, and we will include the SMPL part as an important subsection in the final version due to the limitation of 9 pages at this stage. \n\n> **Q3:** The experimental section would be further richened. For instance, a quantitative ablation study on the number of FOF components would be great to help understand how it works. A runtime break-down would be also nice to shed light on how fast each component is. \n\n**Reply:** Additional experiments as requested have been added in Sec. 5 and Sec. 6 of the supplementary document.\n\n> **Q4:** Regarding the number of FOF components: It would be great to have a quantitative analysis of how this affects the performance. Also, it seems this number doesn't significantly impact run time since only tensor multiplication is needed. It would be great if the authors could explain why a maximum of 15 is used, i.e., what would happen if N>100? \n\n**Reply:** Using a larger number of FOF components (N) does not affect the run time of the reconstruction part that involves only the multiplication of the basis tensor and the coefficient tensor. However, the number of channels in previous convolutional layers should be also enlarged to increase the network capacity otherwise the performance will actually drop. As a result, increasing N will also enlarges the overall network, and requires more computational and memory resources of GPU. Moreover, the training of the network will also become more difficult. As shown in ablation results suggested by Q3, the reconstruction performance reaches the knee point at N=15, and a larger N beyond will bring only marginal improvements.\n\n> **Q5:** Would the FOF representation be used in other related tasks such as general single image 3D prediction? \n\n**Reply:** Yes, our FOF representation can be used for many tasks, such as 3D shapes generation, completion of 3D human bodies and monocular 3D prediction for other shapes. We will show some examples on the project page when the code is released, and mention these potential applications in final version. ",
" The paper proposes a Fourier Occupancy Field representation to address the problem of monocular human mesh reconstruction from an image. In FOF, the 3D mesh is essentially converted into a multi-channel image that also possesses sampling scalability, stable reconstruction, and low-complexity reconstruction. Experiments demonstrate the proposed method outperforms existing methods quantitatively and qualitatively, while also enabling real-time applications. The strengths of the paper are:\n- The paper is well-written. The discussions and explanations are not only informative but also insightful.\n- The proposed FOF representation is simple and effective. The methodology is general purpose and could benefit other related tasks.\n\nThe weaknesses of the paper are:\n- Minor improvements on paper writing: 1) A typo in Line 236 on the symbol subscript. 2) Table 1 would be much more informative if references could be added for each type of method. 3) It would be nice to specify if any post-processing techniques are used such as smoothing or connected components.\n- There is a significant performance gap, both quantitatively and qualitatively, between vanilla FOF and the enhanced version FOF-SMPL. However, in the main methodology section, the SMPL part was somehow downplayed. If this was done to emphasize FOF as the main novelty, it would be more convincing to apply FOF to other relevant tasks. Otherwise, if the focus is on 3D human reconstruction, the SMPL part should be included as an important subsection with more details.\n- The experimental section would be further richened. For instance, a quantitative ablation study on the number of FOF components would be great to help understand how it works. A runtime break-down would be also nice to shed light on how fast each component is. - Regarding the number of FOF components: It would be great to have a quantitative analysis of how this affects the performance. Also, it seems this number doesn't significantly impact run time since only tensor multiplication is needed. It would be great if the authors could explain why a maximum of 15 is used, i.e., what would happen if N>100?\n- Would the FOF representation be used in other related tasks such as general single image 3D prediction? The authors have adequately addressed the limitations and potential negative societal impact of their work.",
" This paper proposed an occupancy field representation for monocular real-time human reconstruction. The proposed representation represents a 3D object with a 2D field orthogonal to the view direction. Each 2D position stores the 1D occupancy field of the object along the view direction, and the 1D occupancy field is represented by real-valued fourier series. The proposed method is able to achieve 30+FPS while achieving comparable accuracy to other monocular methods. Strength: The paper proposes a real time monocular human reconstruction method with implicit occupancy representation. The occupancy field is represented as a pixel aligned 2D grid. The novel idea of this paper is, in each grid the 1D occupancy signal is represented by real-valued fourier series.\n\nThe paper shows that 1D occupancy signals tend to be low frequency, thus can be effectively represented by the first few fourier basis functions. This reduces the output dimension of the inference model, thus effectively improves computational efficiency. \n\nWeakness: approximating occupancy with a very small number of fourier basis is still quite counter intuitive. I think the author could do a better job in analyzing the property of this approximation. It would be nice to see what is the theoretical bound of the distance to the surface given Equation 6. It would also be nice if the authors visualize the reconstructed 1D occupancy curve given different number of fourier basis. In addition, representing signals with a discrete set of uniformly sampled fourier basis is equivalent to applying sinc filter to regular-grid delta signals. In other words, there are different ways of representing a band-limited signal (e.g. spatial vs frequency representation), thus it would be helpful if the author provides some insight what is the benefits of representing signals in the frequency v.s. spatial domain. \n\nIn general, I recognize the practical advantage of the proposed method, but feels the technical / theoritical contribution of the paper is a bit thin. The paper in general follows the engineering details of other methods such as PIFU and ICON and the difference is mainly the simplified way of representing occupancy signals with fourier transforms. Such strategy has been seen in use in other problems, and it inhirits the defects of bandlimited approximations. (1) With the fourier transformation of 1D the occupancy field given in Equation 6, what is the error bounds of the distance between recovered v.s. groundtruth surface point? \n\n(2) More visualization of the recovered 1D occupancy curves.\n\n(3) Discussion of different ways to represent bandlimited signals and its implication in the problem of interest.\n\n(4) In the discussion, the authors mentioned future works could incorporate wavelet transforms to improve results for thin objects. But it seems that would blow up the dimension of the representation with an naive implementation, and its computational advantage over voxel-based representation may diminish. I would be curious to learn if the idea of utilizing basic signal processing techniques advocated by the authors could indeed be generalized to more complex scenarios. \n\n As pointed out by the author, the bandlimited approximation of the 1D occupancy field is insufficient to recover fine-details as well as thin structures. The poor results in hand and limb regions reveal such defects. ",
" This paper proposes a new representation for human reconstruction. Specifically, for a pixel in the input image, it approximates the occupancy along the casted ray as the Fourier series. The proposed approach learns the representation from data by regressing to the coefficients of the Fourier basis functions. The submission shows the efficacy and flexibilities of the representation on real captured data. ## Strengths\n\n1. Originality: the proposed representation of FOF is novel. Approximating the occupancy with Fourier series is interesting and principled.\n2. Quality: the presented reconstruction is of high-quality.\n3. Clarity: the paper is well-written and easy to follow.\n4. Significance: it is an important problem of developing an efficient and powerful 3D representation to enable real-time application. The submission demonstrates a real-time pipeline with the FOF representation.\n\n## Weakness\n\nI do not see some major weakness. But it would be great if authors can discuss more about the quality of the reconstruction. For example, in Fig. 4, Fig. 5, and video in the supplementary, I observe that PIFuHD usually maintains more details even though FOF-Normal already utilizes the inferred normal maps. It would be beneficial to discuss why FOF representations are not detailed than PIFuHD. Essentially, the training of FOF will be a per-pixel regression problem. Namely, the network needs to regress to GT coefficients $C(x, y)$ for Fourier basis functions. Since the per-pixel regression is a hard problem, I am curious about how robust the FOF representation is to the inaccuracy of such regression. Specifically, I think showing the following results would be beneficial:\n1. The training loss of relative error (absolute error may not be that informative as the magnitude matters)\n2. Manually add noises to the GT coefficients and plot a curve to show the correlation between the perturbed coefficients and the quantitative quality of the reconstructed meshes. I appreciate the authors's discussions about limitations of the proposed approach in Sec. 5, which helps the understanding of the suitable scenarios."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"W9AraIXswtX",
"nips_2022_qtQ9thon9fV",
"b76tx2saz3",
"dHBS8kLaRHP",
"dHBS8kLaRHP",
"gvHTgxYFTWh",
"nips_2022_qtQ9thon9fV",
"nips_2022_qtQ9thon9fV",
"nips_2022_qtQ9thon9fV"
] |
nips_2022_pfEIGgDstz0 | Non-rigid Point Cloud Registration with Neural Deformation Pyramid | Non-rigid point cloud registration is a key component in many computer vision and computer graphics applications. The high complexity of the unknown non-rigid motion make this task a challenging problem. In this paper, we break down this problem via hierarchical motion decomposition. Our method called Neural Deformation Pyramid (NDP) represents non-rigid motion using a pyramid architecture. Each pyramid level, denoted by a Multi-Layer Perception (MLP), takes as input a sinusoidally encoded 3D point and outputs its motion increments from the previous level. The sinusoidal function starts with a low input frequency and gradually increases when the pyramid level goes down. This allows a multi-level rigid to nonrigid motion decomposition and also speeds up the solving by ×50 times compared to the existing MLP-based approach. Our method achieves advanced partial-to-partial non-rigid point cloud registration results on the 4DMatch/4DLoMatch
benchmark under both no-learned and supervised settings. | Accept | All reviewers agree this work is a creative approach to nonrigid registration, which is particularly hard in the mesh-free point cloud setting. The discussion between the authors and reviewers was extremely productive and addressed most of the major concerns about this work.
In preparing the camera ready, the authors are encouraged to incorporate the new tables of results that appear in the discussion below into the paper and/or supplemental materials, to make sure they are archived. | train | [
"MQybYHIBaoK",
"FM4SicqE5j0",
"ZpWBxyYEaEm",
"4A-yYQcXMzar",
"LJ5AqphgpjB",
"HUvm3_PYhH8",
"oG-dtzjSYOp",
"gsKgJsUNiO8Z",
"Zyhj4IcUCl_",
"vXqyWKZso6A",
"Vuutvcw8Xmp",
"tw3X98-zvZ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author feedback addresses most of my concerns. Taking into account the comments from other reviewers and the author feedback, I am inclined to accept this paper, but rejecting would it not be that bad.\n",
" I thank the authors for their effort to address my concerns.\n\nI have carefully read the rebuttal and other reviewers' comments. In my original review, my main concerns were about different challenging scenarios to test the robustness of the method. I see that further experiments have been provided, and I am satisfied with these additions, while the authors agree that there are limitations about significantly non-isometric cases common to the unsupervised approach. I suggest including this observation in the main manuscript limitations.\n\nI see that other reviewers, in general, agree that the idea is sound and interesting, while maybe not dramatically innovative from a general perspective. Looking forward to discussing this with other reviewers!",
" We would like to thank all reviewers for their detailed feedback!\n\nWe are glad that the reviews found our hierarchical neural deformation model is \"intuitive\"[hpiz], \"interesting\"[AmBj], \"well-motivated\" [xwkn,hpiz] and \"novel\"[iTyU];\nthe experiments are \"comprehensive\"[hpiz], includes results from \"a pretty large set\"[hpiz] or \"a variety\"[xwkn] of existing methods;\nthe proposed method \"performs well in the evaluation\"[AmBj], runs \"faster\"[xwkn], and \"outperform several SOTA methods\"[iTyU];\nthe results are \"convincing\"[iTyU];\nand the paper is also \"well written and easy to follow\"[xwkn].\n\nWe propose the first hierarchical neural deformation model for non-rigid point cloud registration.\nOur method not only demonstrates superior non-rigid registration results on public benchmarks but also runs faster than existing coordinate-networks-based approaches.\n\nTo address some concerns of the reviewers, we conduct additional experiments including:\n- Ablation study of network initialization strategies. [hpiz] \n- Ablation study of loss functions. [AmBj]\n- Evaluation on real-world benchmarks.[xwkn,iTyU]\n- Experiments on different levels of noise, occlusion/partiality, input size, etc.[xwkn,iTyU]\n\nWe encourage the reviewers to also read our responses to the other reviewers.\n\nThank you,\n\nthe authors",
" Thank you for the review and the helpful comments! Below, we answer your questions.\n\n### Why the proposed loss function is suitable for partial-to-partial registration?\nIn the supervised case, partial-to-partial registration is made possible by relying on the correspondence term (in equation 6) as the dominant loss function. The correspondence is obtained using the learning base point-to-point matching method Lepard [19]. We further use a transformer to reject match outliers. Finally, we get high-quality correspondences that are suitable for partial non-rigid registration.\n\nIn the unsupervised case, the problem you mentioned does pose a challenge for registration: minimizing chamfer distance will push all points to move towards the target, and eventually, all source points may collapse on the target. \nOur technique that potentially mitigate this issue is the deformation regularization term (in Equation. 7): it assigns lower weights to the movements of the points that break the local rigidity assumption, i.e. preventing severe collapses of the point cloud geometry. The down-weighted points will not be completely transported towards the target (c.f. Equation. 4), which allows partial alignment.\nThis regularization term reduces the end-point-error, c.f. Table 2. \nNevertheless, the unsupervised setting is still challenging especially for low overlap cases. It turns out that none of the unsupervised methods could produce reasonable results on 4DLoMatch (with overlap ratio < 45%), as already discussed in line. 214-216. \n\nWe test both L1 and L2 norms of chamfer distance as loss terms. L1 norm does improve the metrics as shown in the following table. We will add this. However, please note that the focus of this paper is proposing a hierarchical neural deformation model for non-rigid registration, but not robust loss functions.\n\n- **Ablation study of loss function in unsupervised case on 4DMatch**\n| |   EPE(cm)  |  AccS(%)   |   AccR(%)   |  Outlier(%)  |\n|-----------------------|:---------:|:---------:|:---------:|:----------:|\n| chamfer distance (L2 norm) | 20.05 | 14.34 | 29.79 | 48.4 |\n| chamfer distance (L1 norm) | **19.85** | **18.48** | **35.31** | **45.64** |\n| | | | | |",
" Thanks for reviewing our paper! In the following, we answer your questions.\n\n\n### 1. Is NDP (instead of LNDP) basically solving an optimization problem without any supervision?\nYes, NDP is solving an online optimization problem without any supervision.\n\n### 2. Does the initialization of the network weights affect the performance of the optimization problem?\nYes, changing the initialization strategy could greatly affects the results of registration. \nThe following table shows an ablation study for network initialization via Pytorch built-in functions. \nWith *.ones_* initialization, the model does not converge at all. \n*.kaiming_uniform_* and *.xavier_uniform_* initialization produce similar results.\nWe will add this. \n\n\n- **Ablation study of Pytorch init functions for NDP on 4DMatch:**\n| *torch.nn.init* | EPE(cm) | AccS(%) | AccR(%) | Outlier(%) |\n|--------------------------|:-------:|:-------:|:-------:|:----------:|\n| *.ones_* | 47706.20 | 0.00 | 0.00 | 100.00 |\n| *.zeros_* | 21.83 | 3.48 | 13.89 | 60.00 |\n| *.kaiming_uniform_* | 20.89 | **14.93** | **29.81** | 50.34 |\n| *.xavier_uniform_* (Default) | **20.05** | 14.34 | 29.79 | **48.4** |\n| | | | | |\n\nIn addition, we employ a trick to constraint the output of MLP: we apply a small scaling factor of $0.0001$ on the output of MLP, this encourages the MLP to produce a near-identity $SE(3)$ matrix at the beginning of optimization.\nWe found this trick is crucial for NDP (no-learned) but not necessary for LNDP (supervised). We will add discussions.\n\n\n",
" Thank you for your review and your questions! Below we discuss the concerns you raised in the review and answer your questions.\n \n### 1. Contribution. \n\nThe contribution of this paper is the introduction of the first neural deformation pyramid model for non-rigid point cloud registration. Existing MLP-based approaches such as Nerfies and SAPE **are black boxes models** that only **represent motion signals at a single scale**, and usually **need a large network to fit complex motion**, as a result, their **optimization is time-consuming**. \nOur method has the following advantages compared to existing MLP-based approaches: \n- **Interpretability**. The multi-scale deformation representation makes the implicit neural network more interpretable.\n- **Task simplification**. By decomposing the complicated non-rigid motion estimation task into a sequence of easier sub-tasks, we manage to simplify this problem. Results prove that this strategy yield more accurate registration than Nerfies.\n- **Optimization speedup**. By replacing the large MLP with a sequence of smaller MLPs, our method's optimization is 50 times faster than Nerfies.\n\n\n \n ### 2. Experiments on other benchmarks. \nWe add additional results on two real-world benchmarks: DeepDeform [Bozic et al.], and KITTI Scene Flow [Geiger et al.].\nWe omit 3DMatch/3DLoMatch because it is a rigid registration benchmark, while this paper focuses on non-rigid registration. \n\n- **Registration results of no-learned methods on DeepDeform benchmark.** (DeepDeform contains real-world partial RGB-D scans of dynamic objects, including humans, animals, cloth, etc. )\n| |   EPE(cm)   |  AccS(%)   |  AccR(%)   |   Outlier(%)  |\n|:------------|:-------:|:-------:|:-------:|:----------:|\n| ZoomOut [23] | 2.88 | 62.31 | 85.74 | 19.55 |\n| Sinkhorn [11] | 4.08 | 42.49 | 77.41 | 23.85 |\n| NICP [28] | 3.66 | 48.16 | 80.16 | 21.34 |\n| Nerfies [30] | 2.97 | 61.58 | 86.82 | 16.11 |\n| NDP (Ours) | **2.13** | **79.01** | **94.09** | **11.55** |\n\n\n \n\n\n- **Registration results on KITTI Scene Flow benchmark.** (KITTI contains Lidar scans in dynamic autonomous driving scenes. ) \n| |   Supervised  |   EPE(m)   |  AccS(%)   |  AccR(%)   | \n|:------------|:-------:|:-------:|:-------:|:-------:|\n| FlowNet3D [8] | Yes|0.199 | 10.44 | 38.89 | \n| PointPWC [38] |Yes| 0.142 | 29.91 | 59.83 | \n| NDP (Ours) | No| **0.141** | **47.00** | **71.20** | \n\n\n \nDeepDeform [Bozic et al.]: DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data, CVPR 2020. \nKITTI [Geiger et al.]: Are we ready for autonomous driving? the KITTI vision benchmark suite, CVPR 2012.\n\n\n \n### 3. Experiment with different level of Overlap/Partiality/Occlusion.\nWe actually already conducted such experiments, please see the results on 4DMatch/4DLoMatch benchmark in Table 1.\n\nWe want to clarify that, **in the pair wise setting, the three terms: overlap, partiality, and occlusion are connected**. \nGiven $\\text{Overlap ratio} =\\theta$, we can obtain the others by $\\text{Partiality}=\\theta$, and $\\text{Occlusion ratio}=(1-\\theta)$. \nBecause overlap ratio is defined by the the percentage of **co-visible** point between a pair of point clouds, \nby this definition, overlap ratio exactly represents the relative partiality between two point clouds.\nOcclusion ratio denotes the ratio of points that are **invisible** from another point cloud, therefore it is can be computed by $1-\\theta$. The following table shows the stats of $\\theta$ in 4DMatch/4DLoMatch. 
Note that our method is state-of-the-art on this benchmark. \n- **Statistics of Overlap/Partiality/Occlusion ratio in 4DMatch/4DLoMatch:**\n||    Overlap ratio / Partiality ($\\theta$ )    |   Occlusion ratio ($1-\\theta$)  |\n|:-|:-:|:-:|\n|4DMatch| 45%~92% |8%~55%|\n|4DLoMatch| 15%~45% |55%~85%|\n\n\nWe further re-group the results on 4DMatch/4DLoMatch based on different ranges of $\\theta$ with 20% interval.\n- **Registration Accuracy (AccR) at different level of overlap ratio**\n| Overlap ratio ($\\theta$) |   < 20%   |   20% ~ 40%  |   40% ~ 60%   |   60% ~ 80%  |   > 80%   |\n|-----------|:------:|:-------:|:-------:|:-------:|:--------:|\n| AccR (%) with NDP | 0.97 | 2.80 | 8.16 | 25.57 | 63.65 |\n| AccR (%) with LNDP | **19.01** | **39.32** | **63.37** | **71.91** | **89.77** |\n\n\n\n \n ### 4. Experiments with noise/outliers. \nWe have added experimental results and discussions in the 2nd paragraph at [Reply to Reviewer iTyU](https://openreview.net/forum?id=pfEIGgDstz0¬eId=gsKgJsUNiO8Z).",
" \n\n \n### 5. \"there is no detailed data about the runtime\"\nActually, we have reported the runtime of all baseline methods, please check the right-most column of Table. 1. Specifically, the runtime (s) of **Nerfies** vs **Ours** is **115.94** vs **2.31**, i.e. Ours runs 50 times faster.\n\n \n### 6. \"In line 53, what does 'via skinning’ mean here?\"\nSkinning is the idea of transforming a point cloud by a\nblend of multiple transformations. A standard algorithm is Linear Blend Skinning (LBS) which uses weighted linear combination of the transformations. We will add explanations.\n\n \n### 7. \"not enough implementation details to reproduce the method, such as the MLP structures.\"\nThe MLP structure is $(width, depth)=(128,3)$ for all pyramid levels, see line 229.\nWe also show the pseudocode and other hyper-parameters in Sec. 3.3.\nWe will clarify other configs as in the following table. \nNote that, as a part of the submission, we have included the source code and data link, which will be made publicly available.\n |  pyramid level  |  learning rate  |  activation  |  weight decay  |  momentum  |\n|:-------------:|:-------------:|:----------:|:------------:|:--------:|\n | 9 | 0.01 | Relu() | 0.0001 | 0.9 |\n\n\n \n### 8. \"why the authors optimize different levels in order instead of updating all levels simultaneously?\"\nThere are two reasons: 1) **natural non-rigid motion has a top-down dependency**: the fine level motion is built on the coarse level motion, and it appears more intuitive to estimate the fine level motion when the coarse level motion is known. 2) **updating MLPs at all levels simultaneously is time-consuming**: it requires $m$ times more computations than updating a single MLP, where $m$ is the number of levels in the pyramid. \nRefer to line.25-30 and line. 223-231 for more discussion.\n\n\n \n### 9. \"Why max_iter=500 and $\\sigma$=15 ?\"\nOn average, the MLP in a pyramid level takes around 70~150 iterations to converge, see Figure 5. Therefore, we consider a maximum of 500 iterations as a safe upper bound for optimization. By observing the optimization loss curve, we found that the loss does not go down anymore if the loss has already stay unchanged for the past 15 iterations, therefore we set $\\sigma=15$ as the convergence criteria. We will clarify.",
" Thank you for your review and your questions!\n\n \n### 1. Experiments on non-isometric deformations, cluttered scene, and point density change, in real-world data. \nWe evaluate our method on two **real-world** benchmarks: DeepDeform, and KITTI Scene Flow. \nThe results are posted in the 2nd paragraph on the [Response to Reviewer xwkn (part 1/2)](https://openreview.net/forum?id=pfEIGgDstz0¬eId=HUvm3_PYhH8).\nOur NDP produces advanced results on both benchmarks.\nIn particular, the Lidar scans in KITTI capture running vehicles, walking pedestrians, and static trees/buildings on the road, i.e., the data is cluttered, and breaks the isometric deformation assumption. The point density of Lidar scans also changes drastically from near to far.\nNDP is robust to the above factors and produces competitive results.\n\n\n\n \n### 2. Registration result at different level of noise.\nThe following table shows the registration accuracy of NDP under different ratios of point cloud noise.\nA noisy point is created via uniform perturbation of a clean point inside a ball with a radius=0.5m (the size of objects in 4DMatch range from 0.6m to 2.1m). We do not observe a significant performance drop until the introduction of 25% noise, while real-world range sensors such as the Kinect1 camera only produces 4%~6% noise, see [Tanwi et al., 2014]. This experiment, combined with the results on DeepDeform and KITTI, proves that our method is robust to noisy data in real-world scenarios. \n\n- **Registration Accuracy Relaxed (AccR) of NDP on 4DMatch under different level of noise**\n| Noise ratio |   0%   |   5%   |  10%  |  15%   |   20%   |  25%  |  30%   |  35%   |  40%   |  45%  |  50%  |\n|-|:-----:|:-----:|:-:|:-:|:-:|:-:|:-----:|:---:|:-----:|:-----:|:-----:|\n| AccR (%) | 29.81 | 28.76 | 27.74 | 27.08 | 27.00 | 24.43 | 24.25 | 24.22 | 23.89 | 22.64 | 21.83 |\n \n\n[Tanwi et al., 2014] Characterizations of Noise in Kinect Depth Images: A Review. IEEE Sensor 2014.\n\n \n### 3. Computation time at a different number of points. ### \nTo conclude first, **the time complexity is sub-linear or $\\mathcal{O}(1)$ with the number of points.** \nThe dominant computation overhead is from the network optimization, the time complexity is $\\mathcal{O} (n\\times m) $, where $n$ is the number of points and $m$ is the number of iterations required for convergence.\nIf $m$ is fixed, the time grows in $\\mathcal{O} (n)$ of the number of points.\nHowever, we found that when increasing training point $n$, the total number of iteration $m$ decreases.\nTherefore, if we use all points for optimization, the registration time grows sub-linearly. \nFor faster registration, we can optimize only sub-sampled points. This keeps a constant registration time regardless of the input size. The following table proves this argument. \n\n- **Registration time (s) at different number of input points** \n| Complexity| | 2k | 4k | 6k | 8k | 10k | 12k | 14k | 16k | 18k | 20k |\n|--|---|:----:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n| $\\mathcal{O}(n)$ | | 2.72 | 5.42 | 8.13 | 10.84 | 13.55 | 16.25 | 18.97 | 21.68 | 24.39 | 27.10 |\n| sub-linear | NDP (use all points for optimization) | 2.72 | 3.56 | 3.79 | 3.78 | 4.84 | 5.06 | 5.07 | 5.70 | 6.38 | 5.91 |\n| $\\mathcal{O}(1)$ | NDP (sub-sample 2k points for optimization) | 2.72 | 2.58 | 2.60 | 2.57 | 2.47 | 2.82 | 2.63 | 2.76 | 2.58 | 2.87 |\n \n\n\n \n### 4. 
Registration result at different level of point clouds partiality.\nIn pairwise setting, we use overlap ratio $\\theta$ to denote the partiality. \nThe results are posted in the 3nd paragraph on the [Response to Reviewer xwkn (part 1/2)](https://openreview.net/forum?id=pfEIGgDstz0¬eId=HUvm3_PYhH8).\n\n\n\n \n### 5. \"A's chest is mapped to B's belly\"\nThis is indeed a drawback of NDP: it is trying to found correspondence in nearest regions, which may fail under large deformation. \nBut please note that this is the result of an unsupervised method, such artifacts could be resolved with sparse landmark correspondence, e.g. using sparse chest-to-chest, belly-to-belly links as complementary training signal as in the LNDP case. We leave this as future work.\n\n\n \n### 6. Others\n- \"further details are required for implementation\": we add more implementation details at issue 7 in [Response to Reviewer xwkn (part 2/2) ](https://openreview.net/forum?id=pfEIGgDstz0¬eId=oG-dtzjSYOp), we will also publish the source code.\n- \"quantitative results for scale registration\": we are planning to create such a registration benchmark with scale changes, and report results in a revised version.\n- Thanks, we fixed the typos. \n- We added [14] in related work. \n- We cited and discussed [Marin et al., 2020] and will try to evaluate this method in a revised version.",
" This paper presents Neural Deformation Pyramid (NDP), a learning-based method for non-rigid point cloud registration.\nGiven a source point cloud and a destination point cloud, NDP learns a dense SE(3) warp field (or dense Sim(3) warp field in the case of scaling difference), that is, to learn a rigid transformation for each point in the source cloud, so that the source cloud is best aligned with the destination cloud.\n\nThe proposed NDP seeks this warp filed in a hierarchical manner. Specifically, the warp field is decomposed into multiple layers of learnable MLP's, where at each layer the input 3D coordinates are passed through sinusoidal encoding with increasing frequencies. In this way, the NDP decomposes the non-rigid transformation into rigid transformations that go from a global scale to more local refinements. I found this design to be very intuitive.\n\nExperiments compare NDP with a pretty large set of both non-learned methods and learned methods, and demonstrates the superior performance of NDP in terms of accuracy and efficiency. Strengths:\n+ The intuition of hierarchical motion decomposition is intuitive and well motivated in the introduction.\n+ The design of the network and its description is succinct and to the point.\n+ Experiments are comprehensive, and support the claimed benefits of the algorithm.\n+ Limitations are honestly acknowledged.\n Is NDP (instead of LNDP) basically solving an optimization problem without any supervision?\nThe Chamfer distance and regularization term do not seem to require any groundtruth data.\nIf so, do you find the initialization of the network weights affect the performance of the optimization problem? (in terms of quality of registration) Limitations included.",
" The paper proposes a neural model for non-rigid point cloud registration. The key idea is to decompose the desired non-rigid deformation into a hierarchy of rigid/similarity deformations, and train a neural model to predict the deformation at each level. The proposed method performs well in the presented evaluation.\n\nThe main contribution of this paper is the idea of hierarchical deformation and the proposed neural model to predict it. Strengths:\n- The idea of hierarchical neural deformation is interesting.\n- The method performs well in the evaluation.\n\nWeaknesses:\n- It is unclear why the proposed loss function is suitable for partial-to-partial registration. In particular, the chamber distance is based on sum of squared distance from each point to the other model. Minimizing this loss would attempt to reduce the distance for *every* point, including those that do not correspond to any point on the other model due to the partial overlap. In fact, existing work for robust non-rigid registration typically identifies such L2 distance as a source of poor performance for partial-to-partial registration and replaces it with other robust norms (e.g., L1) to allow for large distances on some points. It is surprising that using such a term in the training loss would work well for partial-to-partial registration in the current paper (for the unsupervised case at least). See the weaknesses above. It is adequate.",
" This paper proposes a novel method - Neural Deformation Pyramid (NDP) for non-rigid point cloud registration. Non-rigid point cloud registration remains a challenging problem as the input 3D data often contains noise, outliers, and occlusions. The proposed method NDP deals with the above complexity by decomposing motion hierarchically. The NDP method uses coordinate-MLP as the basic structure, similar to existing works Nerfies [30] and SAPE [13]. The NDP outperforms existing methods on the 4DMatch/4DLoMatch [19] dataset and is faster than existing MLP-based methods.\n ## Strengths\n\n- The proposed method NDP is well-motivated, and the connection to existing methods is clearly stated.\n- The proposed method achieves better performance on both no-learned and supervised settings quantitatively and qualitatively than previous methods and is faster than the existing method Nerfies [30]. \n- The authors compared their proposed methods with a variety of existing methods.\n- This paper is well written and easy to follow.\n\n\n## Weaknesses\n- One of the paper's main contributions is introducing a pyramid architecture to the non-rigid motion registration task. However, there is a range of methods that have already discussed pyramid structures, as justified in Related Work (line 68 to 76). Therefore, there is a lack of technical contribution in this paper. Similarly, the proposed framework is based on coordinate-MLP structure, similar to existing methods Nerfies [30] and SAPE [13]. \n- The authors only conducted experiments on a single benchmark 4DMatch/4DLoMatch [19]. The authors should consider conducting experiments on other benchmarks, such as the 3DMatch/3DLoMatch used in previous research [19].\n- In line 25, the authors claim that the proposed method alleviates the complexity of non-rigid registration task (data noise, outliers, and occlusions). However, there are no quantitative experimental results regarding the above complexities. The authors should consider conducting experiments with the proposed method to data noise, outliers, and occlusions.\n - The authors claim that the proposed method is faster than the existing method; however, there is no detailed data about the runtime speed of the proposed method and existing methods. The author should report the speed in detail in their paper.\n- In line 53, what does 'via skinning’ mean here?\n- There are not enough implementation details to reproduce the proposed method, such as the parameters of the MLP structures.\n- In 3.3 (line 174), the authors propose the non-rigid registration algorithm. However, there is not enough justification for the algorithm. Such as why the authors optimize different levels in order instead of updating all levels simultaneously. Also, there is no justification why the max_iter is set to 500 and the theta to 15 iterations. \n The authors have adequately addressed the limitations and potential negative societal impact.\n",
" The paper proposes a non-rigid registration algorithm that also works in the partial-to-partial point clouds setting. The idea is to regularize the deformation in three ways: 1) by dividing the problem into pyramidal (hierarchical) sub-problems; 2) by considering first low-frequency deformations, and scaling to higher frequencies later; 3) by defining for each point a rigid (or affine) deformation (i.e., rotation, translation, and optional scaling). The method is tested on 4DMatch and 4DLoMatch, showing compelling results and outperforming several SOTA methods.\n == STRENGHTS ==\n- The method seems novel in the composition of its elements. I like the proposed regularizations on the different aspects of the problem. The approach seems relatively straightforward, while probably to be implementable by scratch, further details are required (e.g., a deeper explanation of the architecture implementation, fewer pointers to different papers for details)\n- The shown results are convincing on a significant variety of shapes. The attached video gives a nice insight into the low-to-high frequencies deformation.\n\n== WEAKNESSES ==\n- The method is tested on various shapes, but the quantitative results are only on near-isometric deformations of synthetic data. The only experiment between significantly different shapes (Figure 6) is not convincing in terms of structural preservation of the shapes (the left-most example shows that A's chest is mapped to B's belly).\n- No analysis of noise or clutter is provided. This makes unclear the applicability of the method in real-case scenarios. Also, it mentions the possibility of including the scale factor, but no quantitative results are provided in this case. No analysis of different amounts of point clouds partiality or computational scaling at the different number of points.\n\nMinor fixes:\n- line 125: Fig.3 -> Fig. 1\n- line 137: 'registration of of multiple scans'\n- line 168: 'this regularization help to preserve' -> helps\n- [14] is not introduced in the previous works; [a] would be a better alternative than [6] since it overcome some LBO limitations in the point coluds setting\n\n[a]: Correspondence learning via linearly-invariant embedding, Marin et al., 2020\n\n== AFTER REBUTTAL ==\n\nAfter the rebuttal and the discussion phase, I am prone to keep my initial rating and vote for acceptance. For the final version, I suggest including in the main manuscript the feedback about the method limitations. - How much the method is sensitive to the introduction of noise, clutter, and point-cloud density?\n- How does the method computation timing scale at a different number of points? Both societal impact and limitations are fairly discussed."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"HUvm3_PYhH8",
"gsKgJsUNiO8Z",
"nips_2022_pfEIGgDstz0",
"vXqyWKZso6A",
"Zyhj4IcUCl_",
"Vuutvcw8Xmp",
"Vuutvcw8Xmp",
"tw3X98-zvZ",
"nips_2022_pfEIGgDstz0",
"nips_2022_pfEIGgDstz0",
"nips_2022_pfEIGgDstz0",
"nips_2022_pfEIGgDstz0"
] |
nips_2022_YgK1wNnoCWy | Green Hierarchical Vision Transformer for Masked Image Modeling | We present an efficient approach for Masked Image Modeling (MIM) with hierarchical Vision Transformers (ViTs), allowing the hierarchical ViTs to discard masked patches and operate only on the visible ones. Our approach consists of three key designs. First, for window attention, we propose a Group Window Attention scheme following the Divide-and-Conquer strategy. To mitigate the quadratic complexity of the self-attention w.r.t. the number of patches, group attention encourages a uniform partition that visible patches within each local window of arbitrary size can be grouped with equal size, where masked self-attention is then performed within each group. Second, we further improve the grouping strategy via the Dynamic Programming algorithm to minimize the overall computation cost of the attention on the grouped patches. Third, as for the convolution layers, we convert them to the Sparse Convolution that works seamlessly with the sparse data, i.e., the visible patches in MIM. As a result, MIM can now work on most, if not all, hierarchical ViTs in a green and efficient way. For example, we can train the hierarchical ViTs, e.g., Swin Transformer and Twins Transformer, about 2.7$\times$ faster and reduce the GPU memory usage by 70%, while still enjoying competitive performance on ImageNet classification and the superiority on downstream COCO object detection benchmarks. | Accept | After rebuttal and discussion all reviewers recommend acceptance. The AC sees no reason to overturn this recommendation. | test | [
"9knBlOMv5vo",
"qUIjtjDPra",
"fepmyKQ5536",
"gpzBZbAFkZjT",
"FYcMqSoLXOP",
"SXh5eukv0f9",
"XghQFxBTqo",
"ykX2LO-Ow2P",
"4_wUAaZAYds",
"U_R7fPslxvb",
"wqbox9KFCB",
"Nw8YQrjs_Y",
"xgcy7J31I-"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I do not have further questions. ",
" Dear reviewer, thanks again for the careful reviews and constructive suggestions. We have provided additional explanations and experiments to address the concerns accordingly. Since the deadline of the author-reviewer discussion period is approaching soon, we would like to discuss with you whether or not the concerns have been resolved. And if you have any additional comments, we are happy to answer them further.",
" Dear reviewer, thanks again for the careful reviews and constructive suggestions. We have provided additional explanations and experiments to address the concerns accordingly. Since the deadline of the author-reviewer discussion period is approaching soon, we would like to discuss with you whether or not the concerns have been resolved. And if you have any additional comments, we are happy to answer them further.",
" Thanks, I have no more questions.",
" Thanks for the question. The reviewer might have some misunderstandings about our Group Window Attention in Fig. 2, which aims to address the incapability of the local window attention in hierarchical vision transformers to handle the windows of arbitrary size caused by the random masking. We would like to clarify that, because the isotropic ViT backbones use global attention in all of their building blocks, window grouping is not necessary anymore since there is only ONE window. Therefore, our method will perform exactly the same as MAE if we use the isotropic ViT backbones.",
" Thank you for the response. Regarding Q3, I am wondering if your method could outperform the original MAE on top of ViT backbones, that is, replacing MAE's vanilla random masking with your group window masking mechanism (Fig. 2).",
" > Q4. How much time does the DP solver take? How much time does the grouping, masking, or ungrouping take? Some more detailed analysis would be helpful.\n\nA4. Thanks for the comment. We benchmark the time cost (ms) of each component in a single Group Attention module and summarize the results in the table below.\n\n|Time Cost (ms)| DP | Masking | Pre-proc | Grouping | Ungrouping | Attention Fwd&Bwd |\n|:-------------|:---------:|:-------:|:--------------:|:--------:|:----------:|:---------:|\n| Stage1 | 4.9 | 3.8 | 14.8 | 0.4 | 0.4 | 61.9 |\n| Stage2 | 0.6 | 0.1 | 2.1 | 0.2 | 0.2 | 20.8 |\n| Stage3 | 0.1 | 0.1 | 1.0 | 0.1 | 0.1 | 15.4 |\n\nHere we use a tensor of shape $[256, H_i\\times W_i, C_i]$ as input, where $H_i, W_i, C_i$ denote the height, width, and the number of channels of the features in the $i$th stage. Note that the DP, masking, and other pre-processing operations are only executed twice for each stage, i.e., for the shifted/unshifted window partition. We can see that the extra cost of our method is indeed moderate compared with the attention computation.\n\n\n> Q5. Add an accuracy comparison to Fig. 1.\n\nA5. We will follow your suggestion to add another subfigure of accuracy in Fig. 1 for a more intuitive comparison.\n\n> Q6. Why SimMIM is slightly better than the proposed method.\n\nA6. There might be multiple explanations for this question. First, as in the eighth and ninth rows of Table 2 in our manuscript, we can see that SimMIM also slightly outperforms MAE (which is our standpoint) when using the same ViT-Base backbone. Second, unlike the isotropic ViTs that exchange information between all visible patches, the window-based Swin Transformer can only process a local window of patches in a time and, thereby, the intermediate features of the masked patches may act as proxies for information propogation. \n\n> Q7. Scaling to larger models.\n\nA7. Actually, in Appendix A of our supplementary material, we evaluated our method on the larger Swin-L backbone, which has $2\\times$ parameters compared with the Swin-B. The results are summarized below. We can observe that our method still obtains competitive performance on IN-1K with even larger efficiency improvements, i.e., $\\sim 2.7\\times$ speedup and only $\\sim 30$% of GPU memory consumption, highlighting the efficacy of our method. We will put this result into the main body of our paper to emphasize the scaling ability of our method.\n\n| Method | PT Resolution | GPU Hours | GPU Memory | IN-1K Acc. |\n|:-------|:-------------:|:---------:|:------------:|:-----------------:|\n| SimMIM | 192x192 | 2821 | 727.0GB | 85.4 |\n| Ours | 224x224 | 1067 | 233.6GB | 85.1 |\n",
" We thank the reviewer for the valuable comments and respond to them appropriately as follows. We will add suggested experiments and explanations in the updated manuscript.\n\n> Q1. The accuracy on ImageNet is a bit disappointing: Swin+MIM trained for 900 epochs obtains 83.7, while Swin-from-scratch trained for 300 epochs already obtains 83.5. Looking at ImageNet results alone, it's not clear if we need MIM on Swin.\n\nA1. Even if we only look at the Imagenet results, we believe that the performance of MIM with Swin is promising compared with the supervised baseline: \n1. **Longer training schedule does NOT further improve the supervised baseline.** As shown in Table 7 of SimMIM [1], Swin-B with 800 epochs of supervised pretraining and 100 epochs of finetuning achieves only 83.3% accuracy on ImageNet. In contrast, our method achieves 83.8% accuracy under the same setting.\n2. **Our method performs much better than the supervised baseline when using a larger model.** With the Swin-L, our method backbone achieves 85.1% accuracy (see A7 for more details) while the supervised baseline only obtains 83.5% accuracy (as in Table 7 of SimMIM [1]).\n\n\n> Q2. Generality of the proposed method.\n\nA2. Thanks for the suggestion. While we chose the representative _pure_ hierarchical ViT--Swin Transformers--as our standpoint, our method can easily generalize to other _hybrid_ hierarchical ViTs (e.g., ViTs with convolution or pooling layers) with minimal adaptions. Our intuition is that the features of visible patches can be viewed as sparse tensors, upon which the convolution/pooling operation can be performed efficiently with the help of Sparse Convolution [1] (implemented for newer GPUs in [2,3]), originally designed for 3D point cloud data. Here we take the Twins Transformers [4], which contain both window attention and (depthwise) convolution, as a concrete example. We apply our method to the Twins-L model (98M parameters, similar to Swin-B) and replace all the standard convolutions with sparse convolutions. Due to the time limitation of the response period, we are only able to provide the results of 100 pre-training epochs. Yet, we can still observe that our method performs on par with the baseline MAE operating on all patches while enjoying 2.6x pre-training speedup in a greener way.\n\n| Method | PT Resolution | GPU Hours | GPU Memory | IN-1K Acc. |\n|:-------|:-------------:|:---------:|:----------:|:---------:|\n| MAE with ALL patches | 224x224 | 261.3 | 408.3GB | 83.5% |\n|Ours for Twins | 224x224 | 84.6 | 102.3GB | 83.3% |\n\nWe will clarify this part, articulate how our method can generalize to most hierarchical ViTs, and include more experiment results of longer training schedules with various ViTs in the updated manuscript. In addition, to facilitate future research, we will release all the pre-trained models, relevant code, and detailed instructions to pre-train customized hierarchical ViTs with our method.\n\n> Q3. Pretraining with longer training epochs.\n\nA3. We conduct experiment to evaluate our method with longer training epochs and summarize the result in the following table, where we can see that our method with 1600 epochs achieve 83.9% accuracy. Note that, due to the time limitation, we directly use the same parameters as 800-epochs pre-training/fine-tuning, which may be suboptimal compared with SimMIM using optimized hyper-parameters for different schedules. 
We may expect a further performance boost with better training configurations.\n\n| Method | PT Epochs | GPU Hours | GPU Memory | IN-1K Acc. |\n|:-------|:-------------:|:---------:|:----------:|:---------:|\n| SimMIM$_{192}$ | 800 | 1609 | 284.8GB | 84.0% |\n| SimMIM$_{224}$ | 800 | 2251 | 398.3GB | 84.1% |\n| Ours | 800 | 887 | 121.6GB | 83.8% |\n| Ours | 1600| 1774 | 121.6GB | 83.9% |\n\n*Note that the fine-tuning performance of our model is slightly boosted by 0.1% compared with the reported number in the paper, following SimMIM to use a fine-tuning batch size of 2048.\n\n[1] SimMIM: a Simple Framework for Masked Image Modeling. Xie et al., CVPR2022.\n\n[2] Graham et al., _3D Semantic Segmentation with Submanifold Sparse Convolutional Networks_. CVPR'2018.\n\n[3] Choy et al., _4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks_. CVPR'2019.\n\n[4] https://github.com/traveller59/spconv (Apache-2.0 License).\n\n[5] Chu et al., _Twins: Revisiting the Design of Spatial Attention in Vision Transformers_. NeurIPS'2021.",
" We thank the reviewer for the valuable comments and respond to them appropriately as follows. We will add suggested experiments and explanations in the updated manuscript.\n\n> Q1. Performance improvement.\n\nA1. We respectfully disagree that the performance improvement is marginal over the supervised baseline: \n1. **Longer training schedule does NOT necessarily improve the supervised baseline.** As shown in Table 7 of SimMIM [1], Swin-B with 800 epochs of supervised pretraining and 100 epochs of finetuning achieves only 83.3% accuracy on ImageNet.\n2. **Our method performs much better than the supervised baseline when using a larger model.** With the Swin-L, our method achieves 85.1% accuracy (see A5 for more details) while the supervised baseline only obtains 83.5% accuracy (as in Table 7 of SimMIM [1]).\n3. **Our method performs much better on transfer learning than the supervised baseline.** As displayed in Table 3 of our paper, when transferred to MS-COCO object detection, the model pre-trained with our method substantially outperforms the one with supervised pre-trained by 1.6% mIoU.\n\n> Q2. Why not compare with SimMIM$_{224}$?\n\nA2. We were unable to directly compare with SimMIM_{224} in terms of performance because the original paper did not provide any results of SimMIM_{224} with Swin Transformer. Here we train a Swin-B model with SimMIM_{224} with the official code and make comparisons with our method in the following table. We can see that SimMIM_{224} performs similarly to our method, but it invloves a sharp increase on training cost.\n\n| Method | PT Resolution | GPU Hours | GPU Memory | IN-1K Acc. |\n|:-------|:-------------:|:---------:|:----------:|:---------:|\n| SimMIM | 192x192 | 1609 | 284.8GB | 84.0% |\n| SimMIM | 224x224 | 2251 | 398.3GB | 84.1% |\n| Ours | 224x224 | 887 | 121.6GB | 83.8%* |\n\n*Note that the fine-tuning performance of our model is slightly boosted by 0.1% compared with the reported number in the paper, following SimMIM to use a fine-tuning batch size of 2048.\n\n> Q3. I think it is important to implement this method on the vanilla MAE with ViT-Base to show the performance gaps caused by group window attention.\n\nA3. We note that the proposed method would be the same as MAE when using the isotropic ViT-Base model. The motivation of our method is to alleviate the drawback of MAE (e.g. weak performance for dense predictions) and generalize it to hierarchical ViTs.\n\n> Q4. Pre-training with longer training epochs.\n\nA4. We conduct experiment to evaluate our method with longer training epochs and summarize the result in the following table, where we can see that our method with 1600 epochs achieve 83.9% accuracy. Note that, due to the time limitation, we directly use the same parameters as 800-epochs pre-training/fine-tuning, which may be suboptimal compared with SimMIM using optimized hyper-parameters for different schedules. We may expect a further performance boost with better training configurations.\n\n| Method | PT Epochs | GPU Hours | GPU Memory | IN-1K Acc. |\n|:-------|:-------------:|:---------:|:----------:|:---------:|\n| SimMIM$_{192}$ | 800 | 1609 | 284.8GB | 84.0% |\n| SimMIM$_{224}$ | 800 | 2251 | 398.3GB | 84.1% |\n| Ours | 800 | 887 | 121.6GB | 83.8% |\n| Ours | 1600| 1774 | 121.6GB | 83.9% |\n\n\n> Q5. Scaling to other model size.\n\nA5. We actually have evaluated our method on the larger Swin-L backbone in supplementary materials. The results are summarized below (and also in Table 4 of our supplementary material). 
We can observe that our method still obtains competitive performance on IN-1K with even larger efficiency improvements, i.e., $\\sim 2.7\\times$ speedup and only $\\sim 30$% of GPU memory consumption, highlighting the efficacy of our method.\n\n| Method | PT Resolution | GPU Hours | GPU Memory | IN-1K Acc. |\n|:-------|:-------------:|:---------:|:------------:|:-----------------:|\n| SimMIM | 192x192 | 2821 | 727.0GB | 85.4% |\n| Ours | 224x224 | 1067 | 233.6GB | 85.1% |\n\n\n\n[1] SimMIM: a Simple Framework for Masked Image Modeling. Xie et al., CVPR2022.",
" We thank the reviewer for the valuable comments and respond to them appropriately as follows. We will add suggested experiments and explanations in the updated manuscript.\n\n> Q1. The paper claims that the proposed approach is for hierarchical ViTs, but the experiments are carried out only for Swin Transformer. It would be more convincing if the authors could show the results for at least another ViT. Or, the authors should consider not to claim the proposed approach to be generally applicable to hierarchical ViTs.\n\nA1. Thanks for the suggestion. While we chose the representative _pure_ hierarchical ViT--Swin Transformers--as our standpoint, our method can easily generalize to other _hybrid_ hierarchical ViTs (e.g., ViTs with convolution or pooling layers) with minimal adaptions. Our intuition is that the features of visible patches can be viewed as sparse tensors, upon which the convolution/pooling operation can be performed efficiently with the help of Sparse Convolution [1] (implemented for the newer GPUs in [2,3]), originally designed for 3D point cloud data. Here we take the Twins Transformers [4], which contain both window attention and (depthwise) convolution, as a concrete example. We apply our method to the Twins-L model (98M parameters, similar to Swin-B) and replace all the standard convolutions with sparse convolutions. Due to the time limitation of the response period, we are only able to provide the results of 100 pre-training epochs. Yet, we can still observe that our method performs on par with the baseline MAE operating on all patches while enjoying 2.6x pre-training speedup in a greener way.\n\n| Method | PT Resolution | GPU Hours | GPU Memory | IN-1K Acc. |\n|:-------|:-------------:|:---------:|:----------:|:---------:|\n| MAE with ALL patches | 224x224 | 261.3 | 408.3GB | 83.5% |\n|Ours for Twins | 224x224 | 84.6 | 102.3GB | 83.3% |\n\nWe will clarify this part, articulate how our method can generalize to the most hierarchical ViTs, and include more experiment results of longer training schedules for various ViTs in the updated manuscript. In addition, to facilitate future research, we will release all the pre-trained models, relevant code, and detailed instructions to pre-train customized hierarchical ViTs with our method.\n\n\n> Q2. The sole optimization target is the computation cost, leaving the pretrain quality completely unconsidered. Does the selection of group size affect the pretrain quality?\n\nA2. Because we adopt the Mask Attention scheme (as in Sec. 3.4 and Fig. 3) that ensures the tokens from different local windows have NO interaction, the choice of group size has no impact on the pretrain quality. As a result, we can focus only on optimizing the training efficiency.\n\n> Q3. The authors use dynamic programming to solve the group partition problem, but does a simple and heuristic algorithm, such as $g_s = \\max~{w_i}$, already suffices? The complexity of the dynamic programming algorithm is not well justified/ablated.\n\nA3. Following the setting in Fig. 4, we compare the complexity of a single group attention module, w/ or w/o dynamic programming, at each stage as below. We can see that, when $g_s = \\max_i w_i$, the complexity is doubled without the DP solver. Simply setting $g_s = \\sum_i w_i$ (such that there is only 1 group) also suffers from heavy cost, encountering an out-of-memory error in practice even with a much smaller batch size (e.g., 64 per GPU). 
In contrast, with the DP solver, the complexity is significantly reduced even when we simply fix the group size to the same as the window size $p\\times p$ (note that $\\max_i w_i = p\\times p$ with a high probability in practice) as discussed in L272 of our original submission. This experiment demonstrates the efficacy of our optimal grouping scheme.\n\n| Group size $g_s$| Dynamic Programming | FLOPs@Stage1 | FLOPs@Stage2 | FLOPs@Stage3 |\n|:-----------|:-------------------:|:------------:|:-----------------:|:------------:|\n| Greedy (Alg. 1)| $\\checkmark$ | **62.6M** | **55.4M** | **52.3M** |\n| $p \\times p$ (i.e., 49) | $\\checkmark$ | 65.0M | 58.5M | 52.7M |\n| $\\max_i w_i$| $\\checkmark$ | 64.5M | 59.7M | 62.5M |\n| $\\max_i w_i$ | $\\times$ | 137.5M | 113.3M | 75.7M |\n| $\\sum_i~w_i$ | $\\times$ | 201.3M | 69.2M | 52.7M |\n\n\n[1] Graham et al., _3D Semantic Segmentation with Submanifold Sparse Convolutional Networks_. CVPR'2018.\n\n[2] Choy et al., _4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks_. CVPR'2019.\n\n[3] https://github.com/traveller59/spconv (Apache-2.0 License).\n\n[4] Chu et al., _Twins: Revisiting the Design of Spatial Attention in Vision Transformers_. NeurIPS'2021.",
" This paper presents a cost-effective approach for masked image modeling (MIM) pretraining with hierarchical vision transformers. MIM has been proved effective for vision transformer pretraining, but hierarchical ViT faces a challenge in MIM pretraining that both visible and masked patches have to be involved, which greatly decreases the pretraining efficiency. This paper presents a solution which reduces the pretraining cost of Swin Transformer by around half while achieving on par performance. This paper addresses a relevant problem with a technically sound solution. The cost of MIM pretraining for hierarchical ViTs is indeed a painpoint in vision backbone learning. The proposed group window attention scheme is reasonable, and the evaluation on Swin Transformer is c. \n\nThe paper claims that the proposed approach is for hierarchical ViTs, but the experiments are carried out only for Swin Transformer. It would be more convincing if the authors could show the results for at least another ViT. Or, the authors should consider not to claim the proposed approach to be generally applicable to hierarchical ViTs. \n\nThe group window attention algorithm solves an optimization problem to determine the group size and the number of groups. The sole optimization target is the computation cost, leaving the pretrain quality completely unconsidered. Does the selection of group size affect the pretrain quality?\n\nThe authors use dynamic programming to solve the group partition problem, but does a simple and heuristic algorithm, such as g_s=max{w_i}, already suffices? The complexity of the dynamic programming algorithm is not well justified/ablated. Please refer to the previous section. No potential negative societal impact is identified. ",
" The paper first introduces to properly apply the MAE pre-training method on Swin Transformer backbone architecture. To adapt the visible-only encoder with local window attention in Swin, the paper proposes a new Group Window attention mechanism with Dynamic Programming to automatically tune the group size and local windows. The introduced method significantly reduces the pre-training hours of SimMIM on Swin Transformer with marginal performance drops. Strengths:\n+ Writing: The paper is generally well written and easy to follow. The story of green AI is interesting.\n+ Contribution: It is significant to well apply the MAE-style pretraining method on hierarchical Vision Transformer architectures, especially benefitting dense prediction downstream tasks, e.g., detection and segmentation. It is the first work to investigate this problem so far as I know.\n\nWeaknesses:\n\nI am generally satisfied with this work but have several concerns about the experiments:\n1. **Marginal improvements:** The improvements over training from scratch on ImageNet is marginal, i.e., 83.5 vs. 83.7. The pretraining hours are comparable (840 for training from scratch and 887 for this work), it may raise a question that if 83.7 can also be achieved by training from scratch for 887 hours. By the way, why not compare to SimMIM_224 on Swin-Base?\n2. **Lack of ablation studies:** (i) I think it is important to implement this method on the vanilla MAE with ViT-Base to show the performance gaps caused by group window attention. (ii) I would like to see if the performance can be further improved with more training epochs. For example, for 800 epochs, SimMIM_192 takes double the training hours than this work but +0.3% performance gains. I want to know if this method can also achieve 84.0% with double training hours (800ep -> 1600ep).\n3. **More model scales:** It is better to show the power of this method on other sizes of Swin Transformer, e.g., small, large, etc.\n\n\n I like the motivation and story of this work but would like to see more experiments that would make it more solid. Please see Weaknesses for details. The authors discussed the limitation of the batch-wise masking scheme in the Appendix.",
" In this paper, the authors propose a method to perform MAE style pre-training on Swin transformers. The main idea is to dynamically put the visible patches into groups and then perform attention within each group. To ensure that information doesn't \"leak\" out of each window, attention masking is performed in each group. This method can be efficient because of the Swin structure -- attention can only happen within a window, so the size of each group can be small and well-packed. The grouping is solved by dynamic programming to optimize FLOPs. The authors evaluate this approach on ImageNet and COCO and show that this method is efficient while being competitive in accuracy. Strengths:\n+ The 2x speedup looks promising and useful. Given the wide use of Swin transformers, an efficient way to pre-train with SSL can be quite impactful. \n+ The idea to perform grouping for fast MAE on Swin is interesting and novel. \n+ The idea to optimize grouping based on FLOPs with DP is simple but seems to work well. \n+ The accuracy on COCO suggests MIM+Swin is likely a good idea. \n\n\nWeaknesses:\n- The accuracy on ImageNet is a bit disappointing: Swin+MIM trained for 900 epochs obtains 83.7, while Swin-from-scratch trained for 300 epochs already obtains 83.5. Looking at ImageNet results alone, it's not clear if we need MIM on Swin.\n- The title and writing overall suggests that the method works on general hierarchical ViT models, but in reality, the authors only show that the method works on Swin transformers. It's not just lack of experiments, it's not even clear how one can apply this approach to other hierarchical models (e.g., MViT). \n- The experiments are limited to <=800 epoch training. Given that MIM is usually more useful in long-training regime, missing >800 epoch experiments make it harder to compare to MAE. \n- How much time does the DP solver take? How much time does the grouping, masking, or ungrouping take? Some more detailed analysis would be helpful. \n- (minor:) In figure 1, without accuracy, it's hard to appreciate the method by speed only.\n 1. I wonder if the authors have any intuition on why the accuracy of SimMIM is slightly higher than the proposed method.\n2. I wonder if MIM on Swin works well with a larger backbone (e.g., Swin-L, Swin-H, etc.). I know that this might not be the main focus of this paper, but if MIM doesn't look promising on Swin, then speeding this up is less important. Conversely, if MIM on large Swin models works very well, then the importance of this work will be large. I think the main limitation is that the method focuses on Swin Transformers only, but not other hierarchical models."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"fepmyKQ5536",
"xgcy7J31I-",
"wqbox9KFCB",
"FYcMqSoLXOP",
"SXh5eukv0f9",
"4_wUAaZAYds",
"xgcy7J31I-",
"xgcy7J31I-",
"Nw8YQrjs_Y",
"wqbox9KFCB",
"nips_2022_YgK1wNnoCWy",
"nips_2022_YgK1wNnoCWy",
"nips_2022_YgK1wNnoCWy"
] |
nips_2022_L8ESR8IQ7Gb | Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models | Transfer learning aims to leverage knowledge from pre-trained models to benefit the target task. Prior transfer learning work mainly transfers from a single model. However, with the emergence of deep models pre-trained from different resources, model hubs consisting of diverse models with various architectures, pre-trained datasets and learning paradigms are available. Directly applying single-model transfer learning methods to each model wastes the abundant knowledge of the model hub and suffers from high computational cost. In this paper, we propose a Hub-Pathway framework to enable knowledge transfer from a model hub. The framework generates data-dependent pathway weights, based on which we assign the pathway routes at the input level to decide which pre-trained models are activated and passed through, and then set the pathway aggregation at the output level to aggregate the knowledge from different models to make predictions. The proposed framework can be trained end-to-end with the target task-specific loss, where it learns to explore better pathway configurations and exploit the knowledge in pre-trained models for each target datum. We utilize a noisy pathway generator and design an exploration loss to further explore different pathways throughout the model hub. To fully exploit the knowledge in pre-trained models, each model is further trained by specific data that activate it, which ensures its performance and enhances knowledge transfer. Experiment results on computer vision and reinforcement learning tasks demonstrate that the proposed Hub-Pathway framework achieves the state-of-the-art performance for model hub transfer learning. | Accept | The submission introduces an approach called Hub-Pathway to leverage a diverse collection of pre-trained models for transfer learning. Hub-Pathway trains a pathway generator network to route examples to various models in a data-dependent manner and aggregates the outputs to produce task-specific predictions. Noise is added to the pathway generator and its output is entropy-regularized to encourage exploration, and the activated models are also individually trained on the target loss to encourage exploitation. The approach is evaluated on several image classification, facial landmark detection, and reinforcement learning tasks.
Reviewers noted the paper's clarity and writing quality and found the empirical evaluation extensive and convincing. On the other hand, they expressed doubts regarding Hub-Pathway's computational and memory complexity at training and inference time, in particular in comparison to the alternative of using an ensemble of models.
The authors responded by citing existing results in the submission and providing new results in the revised appendix showing that Hub-Pathway does in fact have lower computational complexity than an ensemble. They also argued through additional results that the forward propagation through multiple models (as opposed to holding the parameters of multiple models in memory) is the main memory bottleneck (which model ensembles also face), and that Hub-Pathway does better in that regard than model ensembles due to the pathway activation mechanism.
Overall the authors' response was satisfying to the reviewers, and their consensus is that the submission should be accepted. I therefore recommend acceptance. | train | [
"CxH3ftHiiby",
"CrCDkj8cjGL",
"L7GpOjjRwrj",
"IfLd_E8SpNo",
"LLYwpOcUXL9",
"B7P8nHXMP2G",
"hBeNdgGU0Jj",
"zfPPXbcvjjC",
"c-d617Lvv9N",
"WU568XNnBgc",
"QX_gju7Utx8",
"g3h89MCecwz",
"szBTdYCgwSq",
"k00HPMBPmOL",
"ha85E6HloL",
"UZhDxIUX-sZ",
"3Ww_gLjpWmt",
"AlBVBHHhzld",
"NOCCB3QKliGK",
"eo85r3dq0vJv",
"gwr53nFE0St",
"iaeZyFSkq-U",
"sXXkI7jCY-o",
"fbXyWBj6rt9",
"X0GuEKBzoOC"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for providing inspiring comments and timely responses. ",
" Dear Authors,\n\nThank you for responding to the questions. This work is really interesting, and I, therefore, stand by my (accepting) recommendation.",
" Many thanks for your efforts in reviewing our paper and responses, and your comments to help improve our paper. We add the results in the rebuttal revision and they will be included in our paper.",
" Dear Reviewer 6QEr,\n\nThank you very much for your time and efforts in reviewing our paper.\n\nWe want to kindly remind you that the author-reviewer discussion period will end in a few days. We have made an extensive effort to try to successfully address your concerns, by clarifying the complexity of our method, providing more analysis and experimental results, and making revisions to the paper and appendix.\n\nIf you have any further concerns or questions, please do not hesitate to let us know, and we will respond to them timely.\n\nAll the best, \n\nAuthors",
" We sincerely thank Reviewer 8MEA for providing detailed and insightful reviews, which helped us strengthen our paper by providing more analysis and clarifying the strengths and limitations. Also, many thanks for the timely responses and the careful judgment of our feedbacks. \n",
" Thanks for the response and for editing the paper to address concerns.\n\nScore has been updated from 4->6",
" Thanks for your timely comments again.\n\nIn such cases, when data activate different models and the whole model hub are activated by the batch, we load all models in the memory, forward the data to the specific models activated by them, and train the activated models together. \n\n| Method | Total | Load Models | Generator Output | Model Output |\n| ------ | ------ | ------ | ------ | ------ |\n| Single | 4217 | 897 | / | +2692 |\n| Hub-Pathway | 5085 | 1203 | +598 | +2694 | 5085 |\n\nWith batch-size=32, $k$=1, we report the memory cost (M) during training, where each data point activates one model, and all models are activated in the batch. We report the total memory cost and the memory cost after loading the model parameters. We also report the increment in memory costs after forwarding data to the pathway generator and pre-trained models to get output tensors. We make the following clarifications:\n\n- The memory costs are mainly composed of two parts: (1) loading and storing models in the memory and (2) forwarding data to models to get output tensors. \n- (1) is determined by the number of activated models in the batch (which is 5 in this experiment) since we load all activated models in the memory to avoid swapping models.\n- (2) is controlled by the number of models each data point forwards, i.e., the top-k selection hyper-parameter $k$ (which is 1 in this experiment) since the tensors are only created when a data point is forwarded through a model. \n\nFrom the second column, we can find that the additional cost of (1) in Hub-Pathway is small (about 300 M) compared with the cost of (2) (about 2600 M). From the last column, we can find that with $k$=1, Hub-Pathway has a similar cost of (2) compared with one single model, which justifies our clarification above.\n\nThis motivates us to spend a few more costs storing all the models to avoid the costs of swapping models in and out of the memory. Because of this, our method has a complexity of about $O(k)$, which can be selected and controlled, while Ensemble has a complexity of about $O(m)$ ($m$ is the size of the model hub), making our method more efficient than Ensemble. \n\nAs you have pointed out, it is a limitation of our work that our method introduces more costs than fine-tuning from a single model. We mainly discuss the situation of common pre-trained models where the cost of storing model parameters is not high compared with output tensors. Inspired by your comments, there may also be cases with large pre-trained models, where the costs of model parameters are higher than the output tensors. In such situations, loading all models in the memory is prohibitive, and we could seek to swap models in the memory to sacrifice time for space. We have updated these discussions of limitations in $\\underline{\\text{Section C of the revised appendix}}$. Thanks again for helping and guiding us to clarify these. We think these limitations do not hurt the core contribution of our method since it is still efficient and effective in common cases.",
" Thanks again for your response : \nI still have an issue with this statement - \n\n**As we discussed in Question 1 , the additional memory cost of using multiple models mainly lies in the latter one, which can be controlled by the top-k pathway activation mechanism of our method. If we do not have enough memory, we can choose a smaller k for each data point.**\n\nCan you clarify the following for me : \n1. Assume k = 1. And assume that we have a batch size of data > the number of models so say bsz=32 and we have 5 models. \nEarly in training, for the 32 samples in the batch size - it is very unlikely that those 32 samples will activate **the same model**. They will each activate a single model, but taken together would activate all **5 models**. What happens in this case ? \nWould you not have to either :\n\na) swap models in and out of training\n \nb) for that batch, only update a subset of the models on each backward and forward pass - which means that you would be using a smaller effective batch size - and thus increasing your training time cost ?\n\nI think understanding this, and addressing this in the paper as a possible limitation, would strengthen the work. \n",
" Continue with Q2\n\nWe admit that there are some extreme cases where the resource cannot afford to store or forward even one more model, in which case our method may not work. We thank the reviewer for pointing out this limitation, and we have added this discussion into the $\\underline{\\text{Section C of the revised appendix}}$. We think this may not hurt the core contribution of our method because each method may have a scope of its application. For example, there are works designing deeper and bigger models for better performance and works designing lightweight models to sacrifice some performance for efficiency, both of which have their own values. In model hub transfer learning, we also need to trade-off between performance and efficiency, and from our reported results, our proposed method achieves a good balance between them. Of course, inspired by your comments, an ideal solution may achieve performance gain with almost no efficiency losses, which is a challenging goal and encourages our future exploration of this problem.\n\n\n**Q3:** Comparison with two ensembling variants.\n\nWe compare the performance and complexity of Hub-Pathway with fine-tuning from one single model and the two variants of Ensemble proposed by the reviewer in the table above. Hub-Pathway outperforms the two variants of Ensemble, which shows the effectiveness of the proposed method. The training and testing costs of Ensemble-Joint all grow with the number of the models, making it inefficient for the problem. Ensemble-Independent saves the training memory by fine-tuning one model each time, but the computational costs during training and the costs during inference cannot be saved. Ensemble-Independent also takes a longer time for training than Ensemble-Joint, since some additional processes such as data processing and augmentation should be repeated for each model. Hub-Pathway also introduces additional costs than the single model baseline, but the additional costs can be controlled by pathway activation and thus do not scale up with the size of the hub. It also outperforms the two variants of Ensemble on most of the complexity metrics reported above. In all, Hub-Pathway achieves a better balance between the performance gain and the efficiency for the model hub transfer learning problem.",
" Thank you for providing timely comments and giving us the chance to further clarify our methods and results.\n\nWe report the performance and complexity of four different methods during the training and testing time to help clarify the following questions. The memory cost metrics include the additional memory costs of loading all models than one model (Additional Model Memory) and the total memory during training (Train Memory) and testing (Test Memory). The computational cost metrics include the training and testing speed. We explore four different methods, fine-tuning one single pre-trained model (Single), fine-tuning the ensemble of 5 pre-trained models jointly and testing with their ensemble (Ensemble-Joint), fine-tuning 5 pre-trained models independently and testing with their ensemble (Ensemble-Independently), and Hub-Pathway. We have also updated the results and discussion in $\\underline{\\text{Section A.3 of the revised appendix}}$.\n\n| Method | Acc (\\%) $\\uparrow$ | Additional Model Memory (M) $\\downarrow$| Train Memory (M) $\\downarrow$| Test Memory (M) $\\downarrow$| Train Speed (iters/s) $\\uparrow$| Test Speed (samples/s) $\\uparrow$|\n| ------ | ------ | ------ | ------ |------ | ------ | ------ |\n| Single | 83.41 | / | 2149 | 1905 | 10.87 | 484.92\n| Ensemble-Joint | 83.87 | 282 | 7497 | 6397 | 2.32 | 98.64\n| Ensemble-Independent | 84.50 | / | 2149 | 6397 | 10.87 / 5 = 2.17 | 98.64\n| Hub-Pathway | 85.63 | 306 | 4219 | 3537 | 4.68 | 240.48\n\n**Q1:** The computational and memory costs of different methods during the training time.\n\nFrom the table above, we can find that Hub-Pathway clearly introduces fewer computational and memory costs than Ensemble, with the costs being controlled at about 2 times compared with fine-tuning from one single model, while the costs of the ensemble methods scale up with the number of pre-trained models. Note that although Ensemble-Independent saves the training memory by fine-tuning one model each time, the total computational costs increase. Ensemble-Independent even takes longer time for training than Ensemble-Joint since the data processing and augmentation steps should be repeated for each model. It is true that all models can be activated, and thus we need to load all the models into the memory. However, from the table above, we can find that the additional memory costs of loading all models are relatively small compared with the total training memory costs. The main computational and memory costs come from forwarding data through the models, which are controlled by Hub-Pathway with the mechanism of data-dependent pathways to only activate top-k models for each data point (From the results in our paper, we find a small k can achieve good performance). In all, the additional computational and memory costs of training Hub-Pathway do not scale with the size of the model hub, which is also controlled and acceptable.\n\n**Q2:** What if we do not have enough memory to fit all models?\n\nThe memory cost mainly consists of two parts: (1) storing the models in the memory and (2) forwarding data through the models to get outputs. As we discussed in $\\underline{\\text{Question 1}}$, the additional memory cost of using multiple models mainly lies in the latter one, which can be controlled by the top-k pathway activation mechanism of our method. If we do not have enough memory, we can choose a smaller k for each data point. 
As shown in $\\underline{\\text{Figure 3(c) of the paper}}$, although sometimes activating all models may get the best performance, we can already get good performance with a small number of k, which balances the efficiency and accuracy. Although Ensemble-Independent may reduce the training memory cost by training one model each time, Hub-Pathway is more memory-efficient during inference. And in most cases, the training environment may have more resources than the inference environment. Also, as shown in the table above, the additional memory cost of Hub-Pathway with a small k is not much, which is acceptable in most cases.",
" Thank you for the detailed rebuttal and the attempts to address my concerns. \nA few more clarifying questions : \n1. It seems all these figures you are reporting are for **inference** time ? My question below still stands for training time (especially early in training when the distribution over networks will be roughly uniform)\n\n*it stands to reason that a batch of data could activate all models (though each data-point activates an individual subset but the union could be the whole hub) which presents a computational / memory hurdle to handle efficiently*\n\n2. Your response also assumes you have enough memory to fit all the models. What happens if you do not have enough memory to fit all the available models ? How does your method compare to the following ensembling variants :\n 1. ensemble all models by joint fine-tuning ?\n 2. ensemble all models at test time but fine-tune models independently at train time ?\n\nNote that empirically (b) has been found to lead to better performance than (a) [1] and so one would usually do (b)\n\n[1] https://www.microsoft.com/en-us/research/blog/three-mysteries-in-deep-learning-ensemble-knowledge-distillation-and-self-distillation/",
" We would like to sincerely thank Reviewer AErg for acknowledging our contributions and providing detailed and insightful reviews, which inspire us to strengthen our motivation for data-dependent pathways and explore the related work of ModelSoups. Also, many thanks for the careful judgment of our feedback and the timely responses. \n",
" **Q1**: The need of the data-dependent pathway.\n\n> We have added this experiment to Table 5 of the revised appendix\n\nWe found that. Thank you.\n\n**Q2**: Clarification on ModelSoups-Same and ModelSoups-Different.\n\nThank you for the detailed explanations. Your explanation of the ModelSoups-Different setup makes sense to us. Since the parameter averaging of ModelSoup implicitly assumes that the models have the loss surfaces for each other, so it is convincing that it wouldn't work when averaging models pre-trained on different datasets. \n\nFinally, we would raise our score because all of our concerns were addressed by the author's responses and we have increased confidence that this paper should be presented at the conference.",
" Thank you for the comments.\n\n**Q1:** The need of the data-dependent pathway.\n\nWe have added this experiment to $\\underline{\\text{Table 5 of the revised appendix}}$ to strengthen the motivation for selecting data-dependent pathways.\n\n\n**Q2:** Clarification on ModelSoups-Same and ModelSoups-Different.\n\nBoth ModelSoups-Same and ModelSoups-Different are performed on models with the same architecture. In ModelSoups-Different, 'different' means that the models are pre-trained from different datasets or tasks. \n\nTo be specific, in ModelSoups-Same, we fine-tune from the same ImageNet pre-trained model (with the architecture ResNet-50) multiple times with different hyper-parameters to derive multiple fine-tuned models. Then we average the parameters of the top 2 models among them to derive the results of ModelSoups-Same. This is the protocol of how ModelSoups is used in their original paper. \n\nSince in our paper, we consider transfer learning from a hub of models pre-trained from different sources, so we extend the idea of parameter averaging and explore a variant of ModelSoups-Different. In ModelSoups-Different, we fine-tune from the five different pre-trained models: ImageNet, MoCo, MaskRCNN, DeepLab, Keypoint in $\\underline{\\text{Table 1}}$ (all with the architecture ResNet-50), to derive multiple fine-tuned models. Then we average the parameters of the top 2 models among them to derive the results of ModelSoups-Different. \n\nFrom the reported results, we can find that ModelSoups is a good fine-tuning strategy to promote performance over vanilla fine-tuning. However, it may not be directly used for transfer learning with models from different sources since ModelSoups-Different does not achieve good performance. We think the reason is that although with the same architecture, models fine-tuned from different pre-trained models may fall into diverse local minimals, which is unsuitable for direct parameter averaging.",
" ## Reply to the response for Q1\nThank you for the detailed response supported by the clear results. I think the explanation and experiment further strengthen the justification for selecting paths data-dependently. This concern is addressed by your response. Please add this experiment to the paper (or appendix) as well.\n\n## Reply to the response for Q2\nWe thank you for presenting the results of the comparative experiments on ModelSoup despite the short period. We have a question about ModelSoup-Different, how do you average weights with different architectures? Our understanding is that ModelSoup can only be naively applied in a homogeneous setting (i.e., ModelSoups-Same) with a shared architecture.\n\n## Reply to the response for Q3\nThank you for the additional results and explanations. This concern is addressed by your response.",
" Many Thanks to Reviewer AErg for providing a detailed review and insightful questions. \n\n**Q1:** The need of the data-dependent pathway.\n\nSwitching models in the dataset or task level cannot fully use the knowledge in the model hub since different target data may have different relations to pre-trained models even in the same dataset. As shown in the table below, we report the prediction performance of the best model for each dataset (Best Single), the ensemble of the best k models for the task (Ensemble Top-k, k=2), and using the best model for each data point (Oracle). Note that here both Best Single and Ensemble Top-k choose models at task-level and we use the same model(s) to predict all the data in the task. While for Oracle, we predict each data point with the oracle model performing the best on this data point (the oracle model is chosen by comparing the model outputs with the true label of each data point), which is a more fine-grained instance-level selection. We observe that Oracle outperforms both Ensemble Top-k and Best Single with a large margin, which indicates that the instance-level model selection is important and the data-dependent pathway is necessary. Also, we observe that Hub-Pathway outperforms Ensemble Top-k, which shows that Hub-Pathway is a good attempt to explore data-dependent pathways.\n\n| Method | CIFAR | COCO | Aircraft | Cars | Indoors | DMLab | EuroSAT | Avg. |\n| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\n| Best Single | 81.18 | 81.97 | 84.97 | 89.38 | 73.69 | 75.06 | 98.82 | 83.58 |\n| Ensemble Top-k | 82.85 | 83.24 | 85.55 | 90.05 | 74.16 | 75.48 | 98.83 | 84.31 |\n| Hub-Pathway | 83.31 | 84.36 | 87.52 | 91.72 | 76.91 | 76.47 | 99.12 | 85.63 |\n| Oracle | 88.52 | 89.17 | 92.84 | 94.86 | 83.91 | 84.71 | 99.37 | 90.48 |\n\n**Q2:** The performance of ModelSoups.\n\nModelSoup averages the weights of different fine-tuned models. We conduct experiments on two different ways of creating the model soup. The first way is ModelSoups-Same, where we fine-tune from ImageNet pre-trained model multiple times with different hyper-parameter settings and select the top 2 models to average as the model soup. The second way is ModelSoups-Different, where we fine-tune from five different pre-trained models shown in Table 1 and select the top 2 models to average as the model soup. We report the results in the table below. We observe that Hub-Pathway outperforms both model soup configurations. ModelSoups-Different fails, and we think it could be because models fine-tuned from different pre-trained models may fall into diverse local minimals, which is not suitable for parameter averaging. We add the ModelSoups-Same results in $\\underline{\\text{Table 1 of the revised paper}}$.\n\n| Method | CIFAR | COCO | Aircraft | Cars | Indoors | DMLab | EuroSAT | Avg. |\n| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\n| ModelSoups-Same | 81.32 | 82.94 | 85.24 | 90.32 | 75.61 | 74.29 | 98.65 | 84.05 |\n| ModelSoups-Different | 70.27 | 79.43 | 64.90 | 68.51 | 66.42 | 64.72 | 94.67 | 72.70 |\n| Hub-Pathway | 83.31 | 84.36 | 87.52 | 91.72 | 76.91 | 76.47 | 99.12 | 85.63 |\n\n**Q3:** The experimental settings of the image classification task.\n\nWe plot the training curve for the Cars task in the image classification dataset in $\\underline{\\text{Figure 2(b) of the revised appendix}}$. We compare the convergence speed of Hub-Pathway and a single ImageNet model. 
We observe that in the first stage, Hub-Pathway converges a little slower because of pathway finding, but then it converges with a similar speed to the single ImageNet model. As stated in the implementation details, Hub-Pathway and competitors are trained for the same iterations.",
" **Q5:** Are all the inputs similarly structured?\n\nAll the models accept the same input structure, which means that all the pre-trained models should accept the same raw input, e.g., all models in Table 1 and 2 accept the image input. However, different pre-trained models can have different data preprocessing steps, which are included in the pre-trained models. For example, for a ResNet, we need to resize and crop the image, while for vision Transformers, we need to cut the image into patches. \n\n**Q6:** Why use a randomized sub-network.\n\nTemperature annealing can only control the smoothness of the pathway weight distribution but cannot create randomness to alternate the activated models. We need to explore the model hub and learn a picky pathway weight to activate the most suitable model, and the random subnetwork satisfies our requirement. Also, the random subnetwork is lightweight without much extra cost. \n\n**Q7:** The performance of a domain expert baseline.\n\nWe explore a domain expert baseline, where for each task, we select the model with the best performance on this task. We have updated the tables to include the domain expert results in the $\\underline{\\text{revised paper}}$. The results of Table 1 are shown in the table below. We observe that the Hub_Pathway outperforms the domain expert baseline, indicating the importance of transferring knowledge from multiple models. \n\n| Method | CIFAR | COCO | Aircraft | Cars | Indoors | DMLab | EuroSAT | Avg. |\n| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\n| Domian Expert | 81.18 | 81.97 | 84.97 | 89.38 | 73.69 | 75.06 | 98.82 | 83.58 |\n| Hub-Pathway | 83.31 | 84.36 | 87.52 | 91.72 | 76.91 | 76.47 | 99.12 | 85.63 |\n\n**Q8:** Comparison with the state-of-the-art methods on RL experiments.\n\nFor the reinforcement learning experiments, we use the same model architectures for all the pre-trained models, which is exactly the problem setting of Zoo-Tuning, and thus, Zoo-Tuning performs well on this setting. Our method has a comparable performance with Zoo-Tuning and slightly outperforms it in the Gopher task. Compared with Zoo-Tuning, our method is more general to be applied to various heterogeneous architectures of pre-trained models. We have revised this claim in the $\\underline{\\text{revised paper}}$. We further add an experiments in $\\underline{\\text{Section A.6 of the revised appendix}}$. We increase the amount of the pre-trained models and use 5 models on 5 tasks: Seaquest, Riverraid, UpNDown, SpaceInvaders and BankHeist, and then transfer them to the Alien task. Results in $\\underline{\\text{Figure 2(c) of the revised appendix}}$ show that Hub-Pathway can still outperform Zoo-Tuning with the increased size of the model hub. \n\n**Q9:** The effect of the hyperparameters.\n\nFor each dataset, we tune the hyper-parameters on one task and fix the hyper-parameters for other tasks. The hyper-parameters are not over-tuned and are efficient to select. We have already conducted an experiment on the influence of the hyper-parameter $k$ in $\\underline{\\text{Figure 3(c) of the original submission}}$. We add the experiment on the influence of $\\lambda$ in the table below (also plotted in $\\underline{\\text{Figure 2(a) of the revised appendix}}$). We conduct the experiments on the COCO and Cars tasks for image classification. We observe that the performance is stable around $\\lambda=0.3$. 
But without $\\lambda$ ($\\lambda=0$), the performance drops considerably, which indicates that the loss $L_\\text{explore}$ is needed in our method to enhance the exploration of the model hub.\n\n| $\\lambda$ | 0.0 | 0.1 | 0.3 | 0.6 | 1.0 |\n| ------ | ------ | ------ | ------ | ------ | ------ |\n| COCO | 81.34 | 83.87 | 84.03 | 83.28 | 82.89 |\n| Cars | 89.75 | 92.34 | 91.90 | 91.39 | 91.16 |",
" We would like to sincerely thank Reviewer 8MEA for providing the detailed review and insightful suggestions. We have revised our paper and appendix accordingly.\n\n**Q1:** More details about the implementation and costs of the method.\n\nWe give more detailed complexity analysis and clarification of the proposed method.\n\n- The implementation of the approach.\n \nFor Hub-Pathway, we load all the models into the memory instead of switching the model in and out of the memory. For each data point, we compute the pathway weight with the pathway generator and then forward the data point through its top-k activated models. All these processes are executed in the memory without shuffling costs. So the computational costs of Hub-Pathway is about k=2 times of a single model, much smaller than the Ensemble method in $\\underline{\\text{Table 5 of the original submission}}$, or $\\underline{\\text{Table 2 of the revised appendix}}$.\n\n- The memory cost of storing multiple models is much smaller than forwarding data through models.\n\nTo validate it, we conduct the experiment on the classification task in CIFAR with a batch-size of 12, in which we (1) load the pre-trained models in the memory, (2) forward the data through the pathway generator, (3) forward the data through pre-trained models. We report the increase in memory costs (in Megabytes) of each step in the table below (These results are also included in the $\\underline{\\text{Table 4 of the revised appendix}}$). Comparing results in Column 1, the additional memory cost of storing multiple pre-trained models than one single model is about 300 M, which is relatively smaller than those brought by forwarding data through models (Column 3). This additional cost of storing more models is acceptable in most cases. \n\n| Method | Load Model | Forward Generator | Forward Models |\n| ------ | ------ | ------ | ------ |\n| Single | 897 | / | +1008 | \n| Ensemble | 1179 | / | +5218 |\n| Hub-Pathway | 1203 | +274 | +2060 |\n\n- The memory costs of forwarding data through models is controlled by Hub-Pathway. \n\nFrom the Column 3 of the table above, the memory cost of Ensemble scales up with the model size, to 5 times of the single model. The cost of Hub-Pathway is controlled by TopK selection with the pathway activation mechanism, and it will not become larger with even more pre-trained models. The pathway generator is lightweight, so its memory cost is small. Although the whole hub may be activated from the dataset view, each data point only goes through k models activated by it, controlling the memory and computation costs to about k times (a small k can achieve good performance as shown in $\\underline{\\text{Figure 3(c) of the original submission}}$). Besides, as discussed above, the memory cost of storing these models is also relatively small. In all, Hub-Pathway is a method to enable the efficient usage of multiple pre-trained models. \n\n**Q2:** Method performance compared with the ensembling approach.\n\nThe proposed Hub-Pathway framework outperforms the ensembling approach in almost all the tasks. As shown in $\\underline{\\text{Table 5 of the original submission}}$ or $\\underline{\\text{Table 2 of the revised appendix}}$, Ensemble outperforms the single model with 1.09%, which indicates that an improvement of 1.09% is non-negligible. Our method further outperforms the ensemble with 1.13%, which demonstrates that our method outperforms the ensemble with a non-negligible gap. 
Furthermore, our method is more efficient in computation for both training and inference. The pathway only activates a fixed number of models, which can scale up to the increasing size of the model hub while the ensemble method cannot.\n\n**Q3:** About swapping models.\n\nAs discussed in $\\underline{\\text{Qestion 1}}$, the cost of storing the models in the memory is acceptable in most cases and the memory cost of forwarding data is controlled by the pathway mechanism. So only in extreme cases with a limited memory budget, we need model swapping to sacrifice time for space, while in most cases, we load the pre-trained models in the memory without the additional costs of swapping models, which is also easy to implement. \n\n**Q4:** Error bars on results.\n\nWe followed the reported results in the Zoo-Tuning paper, so the error bars were not reported in the original submission. We have now followed the advice of the reviewers' and reported error bars of our experiments in the $\\underline{\\text{revised paper}}$. The variance is small compared to the improvement of the Hub-Pathway method.\n",
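To make the top-k pathway activation described in this response more concrete, the following is a minimal PyTorch-style sketch of data-dependent routing over a model hub. It is an illustration only: the class, method, and variable names are assumptions rather than the authors' implementation, and each pre-trained model is assumed to already carry a target-task head so that all outputs share one dimension.

```python
import torch
import torch.nn as nn

class TopKPathway(nn.Module):
    """Illustrative data-dependent top-k routing over a hub of pre-trained models."""

    def __init__(self, models, feat_dim, k=2):
        super().__init__()
        self.models = nn.ModuleList(models)                 # pre-trained models with new task heads
        self.generator = nn.Linear(feat_dim, len(models))   # lightweight pathway generator
        self.k = k

    def forward(self, x):
        # x: (batch, feat_dim); images would first pass through a small feature extractor.
        logits = self.generator(x)                          # (batch, num_models) pathway logits
        topk_val, topk_idx = logits.topk(self.k, dim=-1)    # each sample activates k models
        weights = torch.softmax(topk_val, dim=-1)           # pathway weights over activated models
        out = None
        for m, model in enumerate(self.models):
            sel = topk_idx == m                             # (batch, k): where model m is activated
            mask = sel.any(dim=-1)
            if not mask.any():
                continue                                    # inactive model: no forward cost this batch
            y_m = model(x[mask])                            # forward only samples that chose model m
            if out is None:
                out = x.new_zeros(x.shape[0], y_m.shape[-1])
            w_m = (weights * sel.float()).sum(dim=-1)[mask].unsqueeze(-1)
            out[mask] += w_m * y_m                          # weighted aggregation of activated outputs
        return out

# Example with stand-in "pre-trained" models (hypothetical; real models would come from a hub).
hub = [nn.Sequential(nn.Linear(16, 10)) for _ in range(5)]
router = TopKPathway(hub, feat_dim=16, k=2)
print(router(torch.randn(12, 16)).shape)                    # torch.Size([12, 10])
```

Because every sample is forwarded only through its k activated models, the forward cost stays near k times that of a single model regardless of hub size, which is the point made above about Hub-Pathway's controlled computation and memory.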
" - Dynamic computation:\n\n[1] Brandon Yang, Gabriel Bender, Quoc V Le, and Jiquan Ngiam. Condconv: Conditionally parameterized convolutions for efficient inference. In NeurIPS, 2019.\n\n[2] Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Lu Yuan, and Zicheng Liu. Dynamic convolution: Attention over convolution kernels. In CVPR, 2020.\n\n[3] Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 1991.\n\n[4] William Fedus, Barret Zoph, and Noam Shazeer. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. JMLR, 2022.\n\n[5] Han, Yizeng and Huang, Gao and Song, Shiji and Yang, Le and Wang, Honghui and Wang, Yulin. Dynamic neural networks: A survey, TPAMI, 2021.\n\n**Q4:** Error bars of the experiments.\n\nWe followed the reported results in the Zoo-Tuning paper, so the error bars were not reported in the original submission. We have now followed the advice of the reviewers and reported error bars of our experiments in the $\\underline{\\text{revised paper}}$. The variance is small compared to the improvement of the Hub-Pathway method.\n\n**Q5:** Ablation studies across all test datasets.\n\nWe conduct the ablation study of Hub-Pathway across all 7 tasks and report the results in the table below. The observation is consistent across these datasets. The pathway mechanism and the explore and exploit losses all help Hub-Pathway to better utilize the knowledge in the model hub.\n\n| Method | CIFAR | COCO | Aircraft | Cars | Indoors | DMLab | EuroSAT |\n| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\n| w/ random path | 79.15 | 80.97 | 86.14 | 90.61 | 72.98 | 75.07 | 98.63 |\n| w/o explore | 80.02 | 81.34 | 85.72 | 89.75 | 73.06 | 75.78 | 98.59 |\n| w/o exploit | 78.32 | 80.10 | 86.38 | 90.59 | 72.02 | 75.95 | 98.65 |\n| Hub-Pathway | 83.31 | 84.36 | 87.52 | 91.72 | 76.91 | 76.47 | 99.12 |\n\n**Q6:** The total inference time for a single model.\n\nIn $\\underline{\\text{Table 5 of the original submission}}$, or $\\underline{\\text{Table 2 of the revised appendix}}$, we reported the inference time of Hub-Pathway when it is tested with batches of data with the batch-size of 12. In this author response, we give a more detailed analysis of the inference time of a single image. We report the time (seconds) of forwarding a single image through the pathway generator and the pre-trained models in the table below (These results are also included in the $\\underline{\\text{Table 3 of the revised appendix}}$). The inference time of Hub-Pathway is about 2 times of a single model, much shorter than Ensemble, because of the pathway activation mechanism. The inference time of the generator is also small, compared with pre-trained models.\n\n| Method | Generator | Pre-trained Models |\n| ------ | ------ | ------ |\n| Single | / | 0.012 | \n| Ensemble | / | 0.056 |\n| Hub-Pathway | 0.003 | 0.023 |\n\n**Q7:** The effect of $\\lambda$ and how it is chosen.\n\n$\\lambda$ controls the trade-off between $L_\\text{task}$ and $L_\\text{explore}$. $\\lambda$ is selected by the performance on one task in each dataset and we fix the chosen hyper-parameter for other tasks. We conduct an experiment on the influence of different $\\lambda$ on the performance and report the results in the table below (also plotted in $\\underline{\\text{Figure 2(a) of the revised appendix}}$). The experiments are conducted on the COCO and Cars tasks for image classification. 
We observe that the performance is stable around $\\lambda=0.3$. But without $\\lambda$ ($\\lambda=0$), the performance drops considerably, which indicates that the loss $L_\\text{explore}$ is needed in our method to enhance the exploration of the model hub. \n\n| $\\lambda$ | 0.0 | 0.1 | 0.3 | 0.6 | 1.0 |\n| ------ | ------ | ------ | ------ | ------ | ------ |\n| COCO | 81.34 | 83.87 | 84.03 | 83.28 | 82.89 | \n| Cars | 89.75 | 92.34 | 91.90 | 91.39 | 91.16 |\n\n**Q8:** The convergence of the model.\n\n We plot the training curve for the Cars task in the image classification dataset in $\\underline{\\text{Figure 2(b) of the revised appendix}}$. We compare the convergence speed of Hub-Pathway and a single ImageNet model. We observe that Hub-Pathway initially converges slightly more slowly because of pathway finding, but it then converges at a speed similar to that of the single ImageNet model.",
" Many Thanks to Reviewer 6QEr for providing the thorough insightful comments. We have revised our paper and appendix accordingly.\n\n**Q1:** The additional cost for using multiple pre-trained models and introducing the pathway generator.\n\nRecently, to improve model transferability, many works increase the model size and develop big models. We instead use the model hub to improve the model transferability. Both of these methods require larger storage and bring more costs for better performance. We want to clarify that even using multiple models, the additional costs can be controlled with properly designed transfer learning methods, and our method can achieve performance gains with acceptable costs.\n\nFirstly, we provided the complexity analysis in $\\underline{\\text{Table 5 of the original submission}}$, or $\\underline{\\text{Table 2 of the revised appendix}}$. It shows that:\n- The computation costs of using multiple pre-trained models can be controlled by Hub-Pathway, which do not scale with the size of the model hub.\n- The additional components of Hub-Pathway, including the pathway generator and aggregator, are lightweight and introduce only a few more parameters compared with the pre-trained models.\n\nSecondly, we delve into the question of storage cost or memory cost in this author response. We conduct the experiment on the classification task in CIFAR with a batch size of 12, in which we (1) load the pre-trained models in the memory, (2) forward the data through the pathway generator, (3) forward the data through pre-trained models. We report the increase in memory costs (in Megabytes) of each step in the table below (These results are also included in the $\\underline{\\text{Table 4 of the revised appendix}}$). We have the following observations:\n- The additional memory costs of storing the parameters of multiple models are relatively small compared with those brought by forwarding data through models. It is the latter one that damages the storage efficiency of using multiple models. [Explanation: Comparing results in Column 1, the additional memory cost of loading multiple pre-trained models is about 300 M, which is relatively smaller than those brought by forwarding data through models (Column 3).]\n- Hub-Pathway controls the memory costs of using multiple models. [Explanation: In Column 3, the memory cost of Ensemble scales up with the model size, to 5 times of the single model. The cost of Hub-Pathway is controlled by TopK selection with the pathway activation mechanism.]\n- The pathway generator is lightweight and brings few additional storage costs. [Explanation: In Column 1, the additional cost of storing the generator is negligible. Compared Column 2 with Column 3, the memory cost of forwarding through the generator is also relatively small compared with forwarding through the pre-trained models.]\n\n In all, Hub-Pathway is a method to enable the efficient usage of multiple pre-trained models. 
\n\n| Method | Load Models | Forward Generator | Forward Models |\n| ------ | ------ | ------ | ------ |\n| Single | 897 | / | +1008 | \n| Ensemble | 1179 | / | +5218 |\n| Hub-Pathway | 1203 | +274 | +2060 |\n\n\n**Q2:** The experimental details.\n\nWe clarify the key experimental details and the choices of the hyperparameters in $\\underline{\\text{Section 4 of the revised paper}}$.\n\n**Q3:** Some more related papers on dynamic computation and transfer learning.\n\nWe have added some classic or recently published related works in the fields of dynamic computation and transfer learning in $\\underline{\\text{Section 2 of the revised paper}}$. Please point out other important works we missed during the discussion period.\n\n- Transfer learning:\n\n[1] Wortsman, Mitchell, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In ICML, 2022.\n\n[2] Bolya, Daniel and Mittapalli, Rohit and Hoffman, Judy. Scalable Diverse Model Selection for Accessible Transfer Learning. In NeurIPS, 2021.\n\n[3] Huang, Long-Kai and Huang, Junzhou and Rong, Yu and Yang, Qiang and Wei, Ying. Frustratingly easy transferability estimation. In ICML, 2022.\n\n[4] Rebuffi, Sylvestre-Alvise and Bilen, Hakan and Vedaldi, Andrea. Learning multiple visual domains with residual adapters. In NeurIPS, 2017.\n\n[5] Aghajanyan, Armen and Zettlemoyer, Luke and Gupta, Sonal. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In ACL, 2021.",
" We would like to sincerely thank Reviewer 5zEn for providing the insightful review.\n\n**Q1:** How could Hub-Pathway be used for autoregressive models?\n\nThe main difference between autoregressive models and those explored in our experiments is that autoregressive models process a sequence of inputs and output a sequence recursively. This has expanded the design space of the Hub-Pathway framework, and it has the potential to be used for such models with properly designed pathway generators and aggregators. \n\n- Pathway generator: A simple implementation is to flatten and concatenate the input tokens as the inputs of the generator. It may be a better choice to employ a model with the ability of sequence modeling, like LSTMs or Transformers, as the pathway generator.\n- Aggregator: A direct choice is to use the vanilla aggregator, which waits for each autoregressive model's prediction $Y_i=\\Theta_i(X)$, where $Y_i=\\{y_i^1,y_i^2,\\cdots,y_i^{n}\\}$, with $y_i^k=\\Theta_i(y_i^{\\leq k-1},X)$, and aggregates all these output sequences in a sequence level: $Y=A(Y_1,Y_2,\\cdots,Y_m)$. For some autoregressive tasks with dynamic output lengths, we can also use such aggregators by padding the outputs of all models to fixed lengths. There may be other more complicated design choices. For example, it would also be possible to implement a recursive aggregator, which aggregate the outputs step by step: $y^k=A(y^{\\leq k-1}, y_1^{k},y_2^{k},\\cdots,y_m^k)$.\n\nThanks for pointing out this problem, and we think it would be an interesting topic to explore deeper in model hub transfer learning problems on the autoregressive tasks in the future work.\n\n\n**Q2:** How could the aggregator be set up for heterogeneous models?\n\nFor each pre-trained model, we remove the original task head and add a new target task head to map the original features to the target task output space. Because the models are utilized to solve the same downstream task together, their transformed outputs, which are the inputs of the aggregator, have the same fixed dimension. The aggregator then concatenates and aggregates these transformed outputs, so we just need the input dimensions of the aggregator to correspond to the transformed output dimensions. Thus, the different dimensions of heterogeneous models' original outputs would not be a problem for Hub-Pathway.\n\n\n**Q3:** The robustness against data drift.\n\nThe pathway weights are fine-tuned on the target task, which activates the models that are most suitable for the target data. So these activated models are robust to the data drift between the source and target domains. As the results in $\\underline{\\text{Figure 3(a)}}$, the ImageNet model, which is commonly considered as a general model for image classification, has advantages in the pathway weights, which shows that Hub-Pathway has the ability to find more robust models. This question introduces the topic of obtaining transferred models from the model hub that can also generalize to other similar tasks or domains, which is not the focus of our paper, where we focus on transferring to one target task. We think it is an exciting and valuable problem in the future work.",
" The paper proposes a method to transfer knowledge from multiple diverse models. First, it learns pathway routes that activate a different set of models depending on input data. Later, it aggregates the outputs to generate task-specific predictions. The authors propose an exploration loss to promote the utilization of all the models. Finally, the paper discusses the efficient fine-tuning of models on a subset of data that activates them. The paper claims state-of-the-art performance for model hub transfer learning on computer vision and reinforcement learning tasks. Strengths:\n- The paper is well-organized and clearly written.\n- The paper's primary idea to facilitate knowledge transfer from multiple pre-trained models is important.\n- The paper proposes a novel approach for transfer learning from multiple models. Hub-Pathway learns to identify top-k models best suited for input datum. The aggregator network generates a final prediction based on the output of k models and path weights. \n- The authors propose an interesting exploration strategy to improve the transferability and exploitation of activated models via selective fine-tuning. \n- The paper highlights the high efficiency compared to ensembles since not all models are active for both training/inference phases.\n\nWeaknesses:\n- The paper makes a broad claim on the effectiveness of the proposed method on heterogeneous model architectures. However, it's unclear how it could be used for autoregressive models where subsequent output tokes depend on the previous ones.\n- The paper discusses replacing output head layers with task-specific layers (#119-#120). It is unclear how aggregator network could be set up for heterogeneous models with different output dimensions. - Please refer to the weaknesses section. A bit more clarification on those aspects would be appreciated.\n- How Hub-Pathway fares against data drift compared to pre-trained models? Likely, some pre-trained models are more robust against data drift; however, Hub-Pathway fails to activate them? There could be limitations on what all heterogeneous model architectures Hub-Pathway could support. Please refer to the weaknesses section.",
" In this paper, the authors propose a transfer-learning method called Hub-Pathway. to leverage a library of pre-trained models instead of a single pre-trained model. The proposed Hub-Pathway utilized the idea of data-dependent transfer learning to find the best transfer learning path through multiple pre-trained models for each input. The output of each pre-trained model is then aggregated to produce the final prediction. The generator for generating the pathway is trained with softmax policy with additional noise to encourage exploration. The authors also propose exploitation loss to better leverage the activated model. Extensive experiments on multiple tasks demonstrate the benefits of the proposed approach. Strengths:\n\n1. The paper is generally well-written and the organization is clear. \n\n2. The idea of using multiple pre-trained models for transfer learning is interesting and the authors also propose a well-designed method for leveraging multiple models.\n\n3. The experiments are extensive which show the effectiveness of the proposed method. \n\n\nWeaknesses:\n1. The additional storage is an overhead for using multiple pre-trained models for transfer learning. Also, the introduction of the pathway generator is another additional cost. \n\n2. The key experimental details should be included in the main text rather than in the supplementary. The choices of the hyperparameters are also not clearly specified. \n\n3. Several related papers on dynamic computation and transfer learning are not cited.\n\n4. Although the authors stated that the experiments are repeated for 3 times, no error bar is reported. \n\n###########################\n\nPost-rebuttal:\n\nThanks for the rebuttal. Most of my concerns are addressed in the rebuttal. I encouraged the authors to include the additional results in the final draft to improve the significance of the paper. I increase the score from 4 to 5. 1. In Table 4, the ablation studies are only considered on COCO and Cars, I am wondering if the observation is consistent across all the test datasets. \n\n2. What's the total inference time (pre-trained models + pathway generator) for a single image? \n\n3. What's the effect of $\\lambda$ and why $\\lambda$ is 0.3?\n\n4. Would the convergence of the model become slower because of the introduction of pathway finding? Yes",
" This paper tackles the problem of transfer learning from a zoo / hub of pre-trained models unto a specific end-task. \nThey treat the problem as data-point dependent and route each data-point through an adaptive subset of the available pre-trained models. \nThey also fine-tune the pre-trained models on the data-sets that activate them. \nThis paper compares to some relevant baselines and shows improvement over them. **Strengths**\n1. Method works for heterogeneous architectures which is not the case for previous approaches \n2. Improved performance over reasonable baselines\n3. Method is relatively easy to understand\n\n\n**Weaknesses**\n\n1. Though the authors claim that the inference and training time cost is better for hub-transfer over ensembling, I am not 100% convinced. Note that the hub-transfer approach introduces significant overhead in terms of shuffling models in and out of memory as data-points are adaptively assigned. Also - with the exploration bonus (and also early in training), it stands to reason that a batch of data could activate all models (though each data-point activates an individual subset but the union could be the whole hub) which presents a computational / memory hurdle to handle efficiently. Can the authors provide more details about how the approaches were implemented and benchmarked for Table 5 ?\n2. Method performance (in the case of out-of-domain transfer) is on par with the ensembling approach which may be simpler to implement \n3. Seems technically complex to implement - would involve swapping models in and out of memory unless one has access to a large memory budget.\n4. No error bars on results even though experiments were run for 3 seeds\n\n\n\n**Update after rebuttal**\nScore updated from 4->6 after rebuttal discussion 1. Could you provide error bars on your experiments since you have 3 runs per experiment\n2. Are all the inputs similarly structured ? Do all the models expect the same type of input structure - no discussion of adapting model input to task structure.\n3. Why add to the computational cost by using a randomized sub-network G_n -> why not just do something like temperature annealing ?\n4. How about a domain expert baseline ? Assuming oracle access to the best single model apriori - can you update the tables with what the performance would be ? This is the baseline of an expert who knows how to pick the best single model for each new task. \n5. For the RL experiments (Fig 2), the author say \"We observe that the proposed Hub-Pathway outperforms these state-of-the-art methods\" - however the errorbars with the ZooTuning experiments clearly overlap. Am I missing something ? 1. The method introduces extra hyper-parameters over methods like ensembling",
" This study proposes Hub-Pathway, a method for maximizing the knowledge gained from multiple pre-trained deep neural network models in a transfer learning problem setting. The basic idea of the Hub-pathway is to select the best subset of models for each datum from the model hub to make predictions. There are two main challenges of this approach: (i) how to select the best path for each data set and (ii) how to aggregate knowledge from multiple models. For the first challenge, the paper introduces a gating function with randomness to achieve a variety of path options for each datum. For the second challenge, the paper develops a mechanism that combines the aforementioned gate function with a function for knowledge aggregation to output a prediction result for the hub as a whole. In the experiments, the paper compared the proposed method with prior methods in the tasks of classification, facial landmark detection, and reinforcement learning, and showed that the proposed method achieves superior performances by dynamically routing paths on model hubs. ### Strength\n* Research questions and motivations are clear.\n* The proposed method is quite simple so it is easy to reimplement.\n* The proposed method can handle heterogeneous sets of models, unlike prior methods.\n* Comparison with prior methods in a variety of experimental settings.\n\n### Weakness\n* Insufficient explanation and evaluation of the need to change paths for each datum.\n* A little lack of evaluation in a homogenous setting.\n* Larger memory size and execution time required during inference time.\n\nThe paper is well-structured and clearly describes the research questions and motivations. The proposed method provides new insights into this research area by achieving knowledge aggregation from heterogeneous model hubs, which has been difficult before, and by showing better experimental results than models in homogeneous settings. On the other hand, comparisons with state-of-the-art methods and verification of the validity of switching paths on a datum-by-datum basis are insufficient (these will be discussed in Questions). In addition, the proposed method arises new challenges in terms of memory size and execution time during inference. However, these issues might not be considered serious enough flaws to damage the main argument of the paper. ### Q1. Do we need to change the path for each datum?\nOne of the key ideas in this paper is to efficiently extract knowledge from pre-trained models by switching paths for each datum. We have a concern on this point. That is, wouldn't it be sufficient to just select a model for each dataset? We could not validate the proposed method of switching on each datum because the evaluation in the current paper analyzed only the performance for each target dataset. Rather than holding a large number of models and training them simultaneously, a more realistic setting in terms of inference cost would be to test a set of fine-tuned models on the target dataset and select Top-K models by validation scores to ensemble. In order to argue for the validity of switching on each datum, we believe that a comparison with such a method of switching models on each target dataset is necessary. This evaluation can easily be done using the single model that appeared in Tables 1 and 2.\n\n### Q2. How is the performance of the proposed method compared to ModelSoups on homogeneous?\nThis is related to Q1. 
In the homogeneous setting, a recent paper presented a method to aggregate the knowledge in multiple models called ModelSoups[1], which averages weights of fine-tuned models and applies the averaged weights to a single model. ModelSoups also adopts a Top-K selection technique as well as hub-pathway and thus it can be considered a model aggregating knowledge for each target dataset level discussed above. Obviously, we recognize that ModelSoups is not a perfect competitor to hub-pathway since it cannot be applied to heterogeneous settings, but we consider a comparison with ModelSoups would be helpful for investigating the characteristics of hub-pathway.\n\n```\n[1] Wortsman, Mitchell, et al. \"Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time.\" International Conference on Machine Learning (2022).\n```\n\n### Q3. About experimental settings of the image classification task\nIn the image classification experiments, the number of training epochs for hub-pathway and the other competitors did not seem to be described. Does hub-pathway require more training time than other methods?\n Nothing to report."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"CrCDkj8cjGL",
"gwr53nFE0St",
"IfLd_E8SpNo",
"sXXkI7jCY-o",
"B7P8nHXMP2G",
"hBeNdgGU0Jj",
"zfPPXbcvjjC",
"WU568XNnBgc",
"QX_gju7Utx8",
"QX_gju7Utx8",
"AlBVBHHhzld",
"szBTdYCgwSq",
"k00HPMBPmOL",
"ha85E6HloL",
"UZhDxIUX-sZ",
"X0GuEKBzoOC",
"fbXyWBj6rt9",
"fbXyWBj6rt9",
"sXXkI7jCY-o",
"sXXkI7jCY-o",
"iaeZyFSkq-U",
"nips_2022_L8ESR8IQ7Gb",
"nips_2022_L8ESR8IQ7Gb",
"nips_2022_L8ESR8IQ7Gb",
"nips_2022_L8ESR8IQ7Gb"
] |
nips_2022_rcrY85WLAKU | Cost-efficient Gaussian tensor network embeddings for tensor-structured inputs | This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and accelerate applications such as tensor decomposition and kernel regression. Existing works have designed embeddings for inputs $x$ with specific structures, such as the Kronecker product or Khatri-Rao product, such that the computational cost for calculating $Sx$ is efficient. We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors, such that for inputs with more general tensor network structures, both the sketch size (row size of $S$) and the sketching computational cost are low.
We analyze general tensor network embeddings that can be reduced to a sequence of sketching matrices. We provide a sufficient condition to quantify the accuracy of such embeddings and derive sketching asymptotic cost lower bounds using embeddings that satisfy this condition and have a sketch size lower than any input dimension. We then provide an algorithm to efficiently sketch input data using such embeddings. The sketch size of the embedding used in the algorithm has a linear dependence on the number of sketching dimensions of the input. Assuming tensor contractions are performed with classical dense matrix multiplication algorithms, this algorithm achieves asymptotic cost within a factor of $O(\sqrt{m})$ of our cost lower bound, where $m$ is the sketch size. Further, when each tensor in the input has a dimension that needs to be sketched, this algorithm yields the optimal sketching asymptotic cost. We apply our sketching analysis to inexact tensor decomposition optimization algorithms. We provide a sketching algorithm for CP decomposition that is asymptotically faster than existing work in multiple regimes, and show the optimality of an existing algorithm for tensor train rounding.
| Accept | Summary:
The major strength is that the sketch size is polynomial in the number of modes for tensor train. This was not known in previous work; for example, the paper by Rakhshan and Rabusseau (https://arxiv.org/abs/2003.05101), which is reference 38, obtains a sketch size that is exponential in the number of modes. There is concurrent work that also achieves this: https://arxiv.org/abs/2207.07417
Theorem 4.3 seems like it could be of independent interest. It shows that, under reasonable assumptions and assuming the contraction order of the data tensor is fixed, a tree-structured embedding achieves the optimal running time - more complex embeddings are not needed (even if the data tensor has cycles). Section 4 is focused on giving an algorithm that, given a data tensor, an embedding, and a fixed contraction order for the data tensor, applies the embedding to the data tensor with the smallest possible asymptotic running time. Not sure if previous work has studied this question, however.
One weakness is that the work doesn’t seem to compare to previous work related to Section 4, or to running times for subspace embeddings for tensor networks. Their CP decomposition algorithm is also incomparable to the other CP decomposition algorithms they mention, since their per-iteration running time for ALS has a 1/eps^5 term. Perhaps if s (the size of each dimension of the tensor) is much bigger than N (the number of modes) or R (the rank), then the second term in the running time should dominate, meaning that the algorithm in this paper would still be significantly better than Recursive LSS (by a factor of NR, as they mention on page 8), so this might not be a significant weakness.
Evaluation:
Based on the reviews and my understanding, I think this meets the bar for NeurIPS. The sketch size for tensor train and the application to CP seem useful, and tensor train rounding seems to be a strong motivation. The significance of Section 4 is not completely clear, but the other results seem to be enough by themselves.
| train | [
"WBcvgLfrJgY",
"r9pR8EC_jno",
"yBqrqWHqkBK",
"0_1e2f6DIEO",
"-sqntrDjLOo",
"F1h__Iu71M5",
"kl_qUJOM6Kd"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for the constructive feedback and great questions! Our comments to your suggestions are as follows:\n\nQ: The intro of the alg details (sec. 4) is hard to follow, and the meaning of U in (4.2) is not clear:\n\nA: Thanks! $(U_i,V_i)$ in (4.2) represents the contraction of two intermediate tensors represented by two subsets of vertices $U_i,V_i\\subset V_D$, as explained in line 760 of the appendix. We will move detailed explanations of the alg in appendix C to the main text in the revised version.\n\nQ: The assumption that the TN ranks should be smaller than the mode dimensions seems not reasonable in practice.\n\nA: Thanks! The paper does not assume that TN ranks are smaller than mode dimensions. Sec. 3 and 4 analyze general tensor network input data. One major contribution is that we proposed the sketching lower bound analysis and an algorithm to sketch so that the bound can be reached.\n\nOur alg is more efficient than existing ones for the low TN rank case, and comparable for the high-TN rank case where the existing algorithm already matches the cost lower bound and is efficient. Also, this is the first effort to offer a thorough lower-bound analysis, which helps build efficient sketching algorithms.\n\nQ: As to the structure of each small TN:\n\nA: Thanks! Due to page limits, we explain how to choose each small TN's structure in Appendix C.1. Each TN has 2 tensors to decrease the computational cost. If another topology is used, the asymptotic computational cost isn't optimal. If each TN only includes 1 tensor, the embedding is only computationally efficient for certain data structures, as shown in line 300, \"Tree tensor network embedding efficiency.\"\n\nQ: Could you give more interpretation, such as an understandable example, for Definition 1, the constrained contraction tree?\n\nA: Thanks! Consider a tensor network with three tensors, $A,B,C$, and a given contraction tree $T_0$ that is $((A,B),C)$, meaning $A$ contracts with $B$ first, then with $C$. Consider another tensor network consisting of $A,B,C$ and another tensors $X$. Then the contraction tree $(((A,B),X),C)$, $(((A,X),B),C)$ and $(((A,B),C),X)$ all are constrained on $T_0$, since the contraction ordering of $A,B,C$ remain the same. However, the contraction tree $(((A,C),X), B)$ is not constrained on $T_0$.\n\nQ: It would be appreciated if the authors can give more interpretation about the Defs for ${D}(e_1),S,I$ given in lines 241-250. \n\nA: We'll include the example below in the revised version to help readers understand. Consider an input data with 5 tensors, $v_1,v_2,v_3,v_4,v_5$, where $v_1$ has a dimension $e_1,v_2$ has a dimension $e_2,v_3$ has a dimension $e_3$, and dimensions $e_1,e_2,e_3$ are to be sketched. $v_4,v_5$ have no dimension to be sketched. Consider the contraction tree $(((v_1,v_4),(v_2,v_5)),v_3)$, where $v_1$ contracts with $v_4$ and outputs $v_{1,4},v_2$ contracts with $v_5$ and outputs $v_{2,5}$, and then $v_{1,4},v_{2,5}$ contract together into $v_{1,2,4,5}$, and $v_{1,2,4,5}$ contracts with $v_3$. For this example, $D(e_1)$ has $(v_1,v_4),D(e_2)$ has $(v_2,v_5),D(e_3)=\\emptyset$, and $I=\\emptyset$. The remaining contractions are all in set $S$, which contain $(v_{1,4},v_{2,5})$ and $(v_{1,2,4,5},v_3).$\n\nQ: Suppose the data TN itself is contractible, is the whole sketching of the proposed method contractible as well?\n\nA: Yes, contractibility is preserved under sketching. 
For each contraction in the original network, our algorithm either preserves this contraction or introduces a sketching matrix and contracts at a lower cost. We will explain this in the revised version.\n\nQ: The difference between #P-complete and #P-hard.\n\nA: They are not equivalent: #P-hard problems may not be in #P, and a problem is #P-complete when it's both in #P and #P-hard (as with any complexity class).\n\nQ: In line 398, why are the entries of the tensor drawn from the uniform distribution, rather than Gaussian? \n\nA: We observe that uniform and Gaussian distributions with the same mean sketch similarly. We will add this in the revised version.\nWhen sketching tensor train input data with order 6 and dimension size 500, to reach a relative error of 0.2, the mean sketch size across 250 experiments is as follows:\n\n| | uniform | Gaussian |\n| ------ | ------ | ------ |\n| TN embedding | 85.0 | 78.1 |\n| Tree embedding | 75.4 | 68.1 |\n| TT embedding | 49.3 | 45.4 |\n\nQ: Could you evaluate the sketching dimension numerically by varying the parameter in a smaller range? Although the authors mentioned that the setting of $\\epsilon$ to be 0.1-0.2 might be good, I think it depends on the specific task. \n\nA: Thanks! Below we show the relation between sketch size and error with the TN embedding. We can see that the asymptotic scaling of the sketch size is $O(1/\\epsilon^2)$. Setting $\\epsilon$ to be 0.1-0.2 would be good for CP-ALS, but it might not be suitable for all applications. We will explain this in the revised version.\n\n| error | sketch size |\n| ------ | ------ |\n| 0.8 | 11.54 |\n| 0.4 | 30.65 |\n| 0.2 | 85.265 |\n| 0.18 | 99.165 |\n| 0.14 | 124.51 |\n| 0.1 | 194.09 |",
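The O(1/ε²) scaling quoted in this reply can be illustrated with a plain dense Gaussian sketch. This is only a generic JL-style demonstration on an unstructured vector, not the paper's structured tensor network embedding; the vector length and sketch sizes below are arbitrary choices that roughly echo the table.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_error(x, m):
    """| ||Sx|| - ||x|| | / ||x|| for a dense Gaussian sketch S with m rows."""
    S = rng.standard_normal((m, x.size)) / np.sqrt(m)   # scaled so E||Sx||^2 = ||x||^2
    return abs(np.linalg.norm(S @ x) - np.linalg.norm(x)) / np.linalg.norm(x)

x = rng.standard_normal(3000)                # stand-in for a (flattened) input vector
for m in (12, 30, 85, 195):                  # roughly the sketch sizes in the table above
    mean_err = np.mean([relative_error(x, m) for _ in range(250)])
    print(m, round(mean_err, 3))             # mean error shrinks roughly like 1/sqrt(m)
```

Halving the target error roughly quadruples the required sketch size, matching the trend of the error/sketch-size pairs reported above.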
" We would like to thank the reviewer for the constructive feedback! Our comments to your concerns are as follows:\n\n Q: The theoretical part of this paper discussed a very general tensor network embedding rooted in the hypergraph presentation. But in the later part, they barely discussed it. As for me, such an arrangement is quite confusing and disconnected.\n \n A: Thank you for the suggestion! In this paper, we consider the case where the input tensor network DATA has a general hypergraph structure, while the EMBEDDING has a graph structure, rather than a general hypergraph structure. All the analysis, applications as well as experiments follow this. We will address this in the revised version of the paper to avoid confusion. \n \nQ: This paper is more focused on the sketching size. Following the hypergraph setting, the graph structure is very important to the embedding. Currently, the author assumes the graph structure is given, but most of the time, the structure is unknown. The paper did not further discuss that. \n \nA: We consider the following setting in the paper:\n \nThe input DATA with its tensor network structure is given, and the tensor network EMBEDDING structure is chosen based on the data to minimize the sketching cost. Therefore, we didn't assume a given embedding structure in our paper. The embedding graph structure is chosen automatically in Algorithm 1. We also believe that the setting where the input data tensor network structure is known ahead of time is reasonable and common.\n\nNote that we are not considering the altogether different problem of finding a good tensor network decomposition of a single tensor. \n \nQ: The computational cost is efficient on the per-iteration level. It still relies on an exhaustive search. The overall computational cost can not be effectively measured. \n\nA: Thank you for the question! The overall computational costs have two parts: the sketching part (I think this is the part the reviewer mentioned as \"per-iteration level\"), and the part to decide the embedding structure. Consider the case when there are $N$ contractions in the input data, then the second part has a cost of $O(N)$, which is negligible compared to the first part. We will add this explanation to the revised version of the paper.\n \nAs to the comment of \"it still relies on exhaustive search\": line 263-265 say that \"The value of $k(e_j)$ is selected via an exhaustive search over all $|{D}(e_j)|$ contractions\". Note that since $|{D}(e_j)|$ is upper-bounded by the number of contractions $N$, this exhaustive search has a cost of $O(N)$, thus is still efficient.\n \nQ: Experimental part is lacking. The author only conducted a general comparison in algorithm 1 but did not show experimental results with baseline methods.\n \nA: We consider multiple baseline algorithms in our experimental section. When comparing sketching tensor train inputs, the tensor train embedding is the baseline and is proposed in [10]. When comparing sketching Kronecker product inputs, the Khatro-Rao product embedding is the baseline and is proposed in [38]. We will explain that Khatro-Rao product embedding is the baseline in the revised version of the paper. \n \nWe also add additional experiments that compare CP-ALS and TT rounding using Algorithm 1 with other baseline sketching algorithms for these two applications. Please refer to our response to general questions.\n\nQ: Though most questions are in weakness. 
I do curious that, is there a good way to assume the hypergraph structure of the tensor network embedding? \n \nA: As mentioned, we consider hypergraph tensor network inputs (e.g., for sketching optimization with CP decomposition), but only consider graph embeddings, for which we can generally bound accuracy.",
" We thank the reviewer for the valuable feedback! Our comments to your questions are as follows:\n\nQ: The authors conducted several synthetic analyses to justify the theoretical results. However, I think it’s better to also include some real data experiments to show the usefulness of the proposed model.\n \nA: Thank you for the comments! Please refer to our response to general questions for the additional experiments. For both CP-ALS and TT-rounding, we use a real dataset as the input tensor to justify the efficacy for real cases. These experiments will be added to the revised version of the paper.\n\nQ: The authors presented two applications of the proposed algorithm, i.e., CP-ALS and TT-rounding. However, they did not conduct experiments on these two applications. I think it is very interesting to show how these algorithms perform in experiments.\n\nA: Thank you for the comments! These experiments are presented in our response to general questions.\n\nQ: In this work, the authors aim to study sketching for input data of general TN structure. However, in the experiments, they only used TT and Kronecker structures, both of which are very simple TN structures. Did the authors conduct experiments on more general TNs?\n\nA: In this work, we didn't perform experiments on more general TNs, since this work is the first to present an efficient algorithm to sketch arbitrary TNs, and there is no existing baseline we can compare to. We leave the detailed high-performance implementation of the algorithm for general TNs as future work.",
" We would like to thank all the reviewers for the valuable feedback.\nOur comments to the general questions from the reviewers are as follows.\n\nReviewers eUJi and QwJS asked about additional experiments on applications mentioned in the paper: alternating least squares for CP decomposition and tensor train rounding. We provide additional experiments below to show that our proposed sketching algorithms achieve similar accuracy as state-of-the-art sketching techniques. We will include these results in the revised version of the paper. Note that our proposed sketching algorithms also yields lower or comparable computational cost compared to the baseline sketching techniques, as is already discussed in the paper.\n\n1. For CP-ALS, we perform experiments on a Time-Lapse hyperspectral radiance image (Sérgio MC Nascimento, Kinjiro Amano, and David H Foster. Spatial distributions of local illumination color in natural scenes), which is a 3-D tensor with size $1024 \\times 1344 \\times 33$. We use this to also justify the usefulness of our method in real cases. We compare the standard CP-ALS, sketched CP-ALS with Algorithm 1 in this work, and sketched CP-ALS with leverage score sampling proposed by Larsen and Kolda in \"Practical leverage-based sampling for low-rank tensor decomposition\". We run experiments with 10 ALS iterations, and the output CP decomposition accuracy of these methods under different CP ranks and sketch sizes are as follows:\n\nCP rank: 2 | 5 | 10\n\nsketch size: 25 | 64 |100\n\nCP-ALS: 0.737 | 0.804 | 0.838\n\nCP-ALS with out sketching method: 0.737 | 0.770 | 0.801\n\nCP-ALS with leverage score sampling by Larsen and Kolda: 0.739 | 0.773 | 0.789\n\nAs can be seen from the results above, the CP-ALS output accuracy with our method has comparable accuracy with the sketched CP-ALS algorithm with leverage score sampling. As is analyzed in 347-355 in the paper, our algorithm also has better complexity compared to the baseline algorithm, especially when the CP rank is low and the tensor dimension is large.\n\n2. For tensor train rounding, we also perform experiments on data from the Time-Lapse hyperspectral radiance image dataset. We use 9 images from the dataset, and reshaping the input data to an order 6 tensor with size $9 \\times 32 \\times 32 \\times 28 \\times 48 \\times 33$. We then use the TensorLy library to truncate the input tensor to a tensor train with a rank of 30 and then benchmark the accuracy of different methods on top of this tensor train.\n \n With different TT rounding rank thresholds and different sketch size, the truncated tensor train accuracies are presented as follows:\n \n TT rounding rank: 1 | 4 | 11 | 20\n \n sketch size: 4 | 9 | 16 | 25\n \n TT-SVD: 0.734 | 0.862 | 0.944 | 0.981\n \n sketching with Algorithm 1: 0.527 | 0.761 | 0.866 | 0.948\n \n sketching with MPS embedding: 0.573 | 0.757 | 0.882 | 0.951\n \n As can be seen from the results above, sketching with our method (Algorithm 1) has comparable accuracy with the baseline algorithm (sketching with MPS embedding). In addition,\n As is analyzed in 362-379 in the paper, our algorithm also has similar complexity compared to sketching with MPS embedding,\n meaning that both methods have similar accuracy and computational cost performance in TT rounding.\n ",
" This paper discusses a sketching method for tensor network (TN) structured input data. Unlike previous methods mainly focusing on specific TN structures of the input data, this paper considers more general structures. To alleviate the exponentially large sketching size, the authors impose TN structure on the coefficients. Theoretical guarantees are provided. Finally, the authors also present two applications of the proposed model, including CP-ALS and TT-rounding.\n\nFor empirical results, they conducted several synthetic data analyses to justify the theoretical results.\n Strengths\n\n1. While previous related works focusing on specific input structure, this work studies a sketching algorithm for general TN structures. This may be useful for more applications. It also improves studies in such fields.\n\n2. The authors also present some theoretical results, which may be beneficial to future work.\n\n3. Based on the proposed algorithms, the authors improve the CP-ALS and TT-rounding algorithms, which are two popular tensor decomposition algorithms used in many applications. So the proposed model may have impacts on potential applications.\n\n\nWeaknesses\n\n1. The authors conducted several synthetic analyses to justify the theoretical results. However, I think it’s better to also include some real data experiments to show the usefulness of the proposed model.\n\n2. The authors presented two applications of the proposed algorithm, i.e., CP-ALS and TT-rounding. However, they did not conduct experiments on these two applications. I think it is very interesting to show how these algorithms perform in experiments.\n\n3. In this work, the authors aim to study sketching for input data of general TN structure. However, in the experiments, they only used TT and Kronecker structures, both of which are very simple TN structures. Did the authors conduct experiments on more general TNs?\n See above. The authors addressed some limitations. I think one limitation is that they should show some real applications to show the usefulness of the proposed model.",
" This paper discussed the tensor network embedding. Specially, they focus on the problem to derive a sequence of sketching matrices. They generalized the embedding accuracy in Theorem 3.1, w.r.t the sketch size of the matrix. They later proposed an efficient sketching algorithm which enjoyed batter per-iteration cost than previous methods. Strengths:\n1. The theoretical analysis is strong. Especially theorem 3.1 which discussed the accuracy of the tensor network embedding.\n2. The proposed sketching method has less computational cost per iteration.\n3. This framework showed broad application on different tensor decompositions like CP decomposition and tensor train embedding.\n\nWeaknesses:\n1. The theoretical part of this paper discussed a very general tensor network embedding rooted in the hypergraph presentation. But in the later part, they barely discussed it. As for me, such arrangement is quite confusing and disconnected. \n2. This paper is more focused on the sketching size. Following the hypergraph setting, the graph structure is very important to the embedding. Currently the author assumes the graph structure is given, but most of times, the structure is unknown. The paper did not further discuss that.\n3. The computational cost is efficient on per-iteration level. It still relies on exhaustive search. The overall computational cost can not be effectively measured.\n4. Experimental part is locking. The author only conducted general comparison in algorithm 1, but did not show experimental results with baseline methods. Though most questions are in weakness. I do curious that, Is there a good way to assume the hypergraph structure of the tensor network embedding?\n\n N/A",
" This paper investigated an efficient sketching method for embedding the tensor with tensor network (TN) structures into lower-dimensional spaces. Compared with the existing works, the main contribution of this paper includes the points as follows:\n\n- The authors established a systematic framework regarding how to sketch a TN with a lower-dimensional representation.\n- The authors carefully discussed the computational cost of the proposed method with necessary proofs for demonstrating the efficiency.\n- Two potential applications are introduced: a) ALS-CP algorithm; and b) tensor train rounding algorithm. **Strengths:**\n\n- (originality) This work reveals the potential advantage regarding of how the inherent TN structure can be leveraged to improve the sketching performance.\n- (originality) The joint discussion of the effective sketching with computational cost (modeled by the contraction tree) is very interesting.\n- (quality) The theoretical discussion of the paper on complexity analysis is remarkable.\n\n**Weaknesses:**\n\n- (clarity) The paper is relatively well written, but the introduction of the algorithm details (sec. 4) is very hard to follow, even though I carefully go through the appendix. For example, the meaning of the letter U in Eq. (4.2) is not clear. I finally found its definition in the appendix but it should be well defined in the manuscript.\n- (significance) A fundamental assumption of the work — the TN ranks should be smaller than the mode dimensions — seems not reasonable in practice. It might be true for some tensor decomposition models such as CP and Tucker for lower-order tensors. In the sense of TN, one typically has to be faced in practice with lower mode dimension (only 2 or 3) but higher ranks (e.g., 100). In this case (also acknowledged by the authors in the paper), the improvement by the proposed method becomes not so significant. The numerical results in the paper also verify this point. 1. As shown in Figure 4, the embedding network is a binary tree-structured, of which the vertices are repeatedly represented by a small TN. How do you determine the structure (e.g., bond dimension, network topology) for these small TNs? What happens if another topology is applied as an alternative?\n2. Could you give more interpretation, such as an understandable example, for Definition 1, the constrained contraction tree?\n3. It would be appreciated if the authors can give more interpretation about the Defs for $D(e_1)$, $\\mathcal{S,I}$ given in lines 241-250.\n4. Regrading the computational cost, suppose the data TN itself is contractible, meaning that the contraction cost is polynomial to the tensor dimension, I am wondering if the whole sketching of the proposed method is contractible as well? In other words, is the contractibility preserved under sketching?\n5. What is the difference between #P-complete and #P-hard? Are they equivalent? \n6. In line 398, why are the entries of the tensor drawn from the uniform distribution, rather than Gaussian?\n7. If possible, could you evaluate the sketching dimension (or compression ratio) numerically with varying the parameter $\\epsilon$ in a smaller range? Although the authors mentioned that the setting of $\\epsilon$ to be 0.1-0.2 might be good, I think it depends on the specific task. The main limitation of this work is that the proposed method seems only to work superiorly when the data TN is very low-rank. It might not be suitable for most tasks in practice."
] | [
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
2,
2,
4
] | [
"kl_qUJOM6Kd",
"F1h__Iu71M5",
"-sqntrDjLOo",
"nips_2022_rcrY85WLAKU",
"nips_2022_rcrY85WLAKU",
"nips_2022_rcrY85WLAKU",
"nips_2022_rcrY85WLAKU"
] |
nips_2022_jPx7vYUNUCt | Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators | The spectacular successes of recurrent neural network models where key parameters are adjusted via backpropagation-based gradient descent have inspired much thought as to how biological neuronal networks might solve the corresponding synaptic credit assignment problem [1, 2, 3]. There is so far little agreement, however, as to how biological networks could implement the necessary backpropagation through time, given widely recognized constraints of biological synaptic network signaling architectures. Here, we propose that extra-synaptic diffusion of local neuromodulators such as neuropeptides may afford an effective mode of backpropagation lying within the bounds of biological plausibility. Going beyond existing temporal truncation-based gradient approximations [4, 5, 6], our approximate gradient-based update rule, ModProp, propagates credit information through arbitrary time steps. ModProp suggests that modulatory signals can act on receiving cells by convolving their eligibility traces via causal, time-invariant and synapse-type-specific filter taps. Our mathematical analysis of ModProp learning, together with simulation results on benchmark temporal tasks, demonstrate the advantage of ModProp over existing biologically-plausible temporal credit assignment rules. These results suggest a potential neuronal mechanism for signaling credit information related to recurrent interactions over a longer time horizon. Finally, we derive an in-silico implementation of ModProp that could serve as a low-complexity and causal alternative to backpropagation through time. | Accept | Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators
The authors propose a biologically plausible method for temporal credit assignment called ModProp. They apply their framework to rate-based recurrent neural networks (RNNs) and show that it outperforms previous approaches.
All reviewers acknowledge that this work studies an interesting topic in computational neuroscience. The authors present compelling experimental results on synthetic data sets.
Weaknesses:
- Long-term dependencies cannot be tackled by this approach, a limitation that is also common among related approaches.
- Somewhat weak experimental evaluation. Evaluation on more complex standard data sets would be beneficial.
- The arguments for the approximations used are left in the appendix.
In general, an interesting study with good experimental results. I propose acceptance. | train | [
"U4CTdYXjnb8",
"2Cc9uGi9FV",
"jvJ31NsbWS",
"zF3RSqy9Iqu",
"S8walkrW75l",
"EUugZmzV58s",
"6i9x-8ohuvl",
"xWC-8XpDtKt",
"xY7zSa_zSSC",
"EdvAz5nCGsV",
"uoQLvw_FJtK",
"Ohv9vEkCu7Z",
"GIUXNPORPEm",
"TriNcsolIl1L",
"nHsR12K3SJU",
"EzgIWeV72Rp",
"avMofTyYkyN",
"elzW55GFmgJ",
"0-56s89er3y",
"4MszX1b-X6L",
"iwIi45rVMNY",
"seZcY92TfoF",
"wiUydbJt9xE",
"yR037qddZov",
"vDUy5jmiO-T",
"gIkEZrrxzBi",
"Z8EbupXOFME"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We have updated the submission to remove all blue coloring of texts on August 9th. ",
" We are very grateful to this reviewer for reading our response and taking that into consideration to revise their score. We thank the reviewer again for all their constructive feedback that led to the improvement of this paper. ",
" Thank you for the response to my questions and the addition of the suggested experiment. \nI have raised my overall rating by 1 point. ",
" Great. We thank the reviewer for this update. ",
" We are very grateful to this reviewer for careful reading of our response and taking that into consideration to revise their score. We agree with the reviewer that the paper would have improved more drastically if harder tasks and stronger benchmarks were used. We would also like to make a side note for future readers of OpenReview (not for the purpose of this review) that the main reason why we did not include SnAp-2 in our comparisons was due to its biological plausibility concerns explained in Appendix C (we focused on comparing against learning rules that are biologically plausible). We thank the reviewer again for all their constructive feedback that led to the improvement of this paper. ",
" Thank you. I have updated the contribution score to 2 to better reflect my overall score.",
" Thank you for the replies to my questions and for the new experiments.\nThe paper would have improved more drastically if the authors had presented results for non-synthetic (toy) tasks and stronger relevant baselines such as SnAp-2. I'm therefore not yet comfortable in recommending this paper strongly for acceptance, but I have raised my score by one point.",
" We very much appreciate the reviewer for recognizing the significance and interest of this work to the NeurIPS community. We would also like to extend our gratitude toward the reviewer for carefully reading our response and take that into consideration for the score revision. Moreover, improving the paper presentation (while meeting the nine-page constraint) is crucial and we really appreciate all this reviewer’s valuable and specific feedback to help us in that direction. \n \n**Why is $\\mathbb{E} W_{i_1, …, i_{s-1}} = \\frac{1}{N^s} (W^s)_{jp}$? (I’m assuming $N^S$ should be $N^s$).**\n\nConsider first the $s=1$ case, which is equivalent to the definition of the product of two matrices: $(W)^1_{jp} = W_{jp} = \\sum_{i_1} W_{j i_1} W_{i_1 p}$.\n \nIn the general case, $(W)^s_{jp} = \\sum_{i_1,\\ldots,i_{s-1}} W_{j i_1} W_{i_1 i_2} \\ldots W_{i_2 i_{s-1}} W_{i_{s-1} p}$. Recognizing the summand in the right hand side is equal to the definition of the $W$-chain $W_{i_1,\\ldots,i_{s-1}}$, we have\n\\begin{equation}\n(W)^s_{jp} = \\sum_{i_1,\\ldots,i_{s-1}} W_{i_1,\\ldots,i_{s-1}},\n\\end{equation}\nwhich is $N^{s-1}$ times the empirical estimate of $\\mathbb{E} W_{i_1,\\ldots,i_{s-1}}$.\n \nWe also thank the reviewer for bringing up the typo, as this made us realize that the normalizing denominator should be $N^{s-1}$ instead of $N^s$ (the indices go up to $i_{s-1}$. Thus, we have replaced all $N^S$ with $N^{s-1}$ in the manuscript. \n \n**(S20): Are you assuming $\\mathbb{E} h_{i_1, …, i_{s-1}} = \\frac{1}{N^s} \\sum_{i_1, …, i_{s-1}} h_{i_1, …, i_{s-1}}$? If so, why does that hold?**\n\nYes, we again replace the expectation with its empirical estimate (again via the Central Limit Theorem):\n \n\\begin{equation}\n\\mathbb{E}h_{i_1,\\ldots i_{s-1}} \\approx \\frac{1}{N^{s-1}} \\sum_{i_1,\\ldots,i_{s-1}}h_{i_1,\\ldots i_{s-1}}\n\\end{equation}\n \nWe have now made this point explicit in the updated manuscript. \n",
" Thank you very much for your clarifications, I better understand the derivation; however, I still have some comments/questions.\n\n- Line 625: Why is $\\mathbb{E} W_{i_1,\\dots,i_{s-1}}=\\frac{1}{N^s}(W^s)_{jp}$? (I'm assuming $N^S$ should be $N^s$).\n- (S20): Are you also assuming $\\mathbb{E}h_{i_1,\\dots,i_{s-1}}=\\frac{1}{N^s}\\sum_{i_1,\\dots,i_{s-1}}h_{i_1,\\dots,i_{s-1}}$? If so, why does that hold?\n\nOverall, I think this paper is of interest to the NeurIPS community, so I am raising my score 1 point. On the other hand, I think this paper still has room for improvement in terms of clarify of presentation though I sympathize with the authors since it is difficult to simultaneously be as mathematically careful as possible and make strong connections to physiology, all in 9 pages.",
" We are very grateful to this reviewer for careful reading of our response and taking that into consideration to revise their score. \n \nWe could not be sure whether the subscores are already revised so if this reviewer does not mind, we would like to bring the 'contribution' subscore to this reviewer's attention, which currently sits at \"1-poor\".",
" Thank you for the thorough response. With the new organization of the paper, I can better appreciate the context and justification for the various approximations. I am still not entirely comfortable with the overall writing and organization of the article. I still believe that the work represents a succession of not as significant contributions as the authors claim that, added together, should be of substantial novelty. \n\nI am nonetheless happy to increase the score a little. ",
" We are extremely grateful for the reviewer's additional specific suggestions to improve the clarity of the paper. We apologize for anything that is unclear. \n\n**What are the mathematical definitions of $\\partial$ and $\\text{d}$? You provide a brief description and a pointer to Ref. [5]. However, the derivation of your algorithm is rather mathematically involved and the derivatives are central to the derivation, so it would be useful to have precise mathematical definitions of these derivatives in the appendix (rather than pointers to another reference).**\n\nWe certainly agree with the reviewer that this point should be clarified more in the paper, as it is central to our derivation. We have added the following explanation to **Notation for Derivatives** in Appendix A.2: *“Without loss of generality, consider a function $f(x,y)$, where $y$ itself may depend on $x$. The partial derivative $\\partial$ of $f$ considers $y$ as a constant, and evaluates as $\\frac{\\partial f(x,y)}{\\partial x}$. The total derivative $d$, on the other hand, takes indirect dependencies into account and evaluates as $\\frac{d f(x,y)}{d x} = \\frac{\\partial f (x,y)}{\\partial x} + \\frac{\\partial f (x,y)}{\\partial y} \\frac{\\partial y}{\\partial x}$.”* We also added an explanation of how this results in very different computations for $\\frac{\\partial s_{p,t}}{\\partial W_{pq}}$ vs $\\frac{d f(x,y)}{d x}$. \n \n**Thank you for clarifying your justification for assuming uncorrelated weights and activities. Can you be more mathematically precise about how approximation (a) follows from assuming that $W$ and $h$ chains are uncorrelated and the central limit theorem applies. What is the central limit approximation you are making? What does it mean for the \"$W$ and $h$ chains to be uncorrelated\"?**\n\nWe thank the reviewer for pointing out this omission. We define a $W$-chain (of length $l$) as\n\\begin{equation}\n\\prod_{\\phi=1}^{l} W_{i_\\phi i_{\\phi+1}},\n\\end{equation} \n \nfor any indices $i_1,\\ldots,i_{l+1} \\in \\{1, ..., N\\}$. Similarly, we define an $h$-chain (of length $l’$) as\n\\begin{equation}\n\\prod_{\\theta=1}^{l'} h_{j_\\theta, t-\\theta},\n\\end{equation} \n \nfor any indices $j_1,\\ldots,j_{l’} \\in \\{1, ..., N\\}$. With these definitions, we call the $W$-chain $W_{i_1,\\ldots,i_{s-1}} = W_{j i_1} W_{i_1 i_2} \\ldots W_{i_{s-1} p}$ and the $h$-chain $h_{i_1,\\ldots,i_{s-1}} = h_{i_1, t-1} \\ldots h_{i_{s-1},t-s+1}$ uncorrelated if\n\n\\begin{equation}\n\\mathbb{E} [W_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}}] = \\mathbb{E} [W_{i_1,\\ldots,i_{s-1}}] \\mathbb{E} [h_{i_1,\\ldots,i_{s-1}}], \\text{where the expectation is over $i_1,\\ldots,i_{s-1}$.}\n\\end{equation} \n \nConsidering $W_{i_1,\\ldots,i_{s-1}}$ and $h_{i_1,\\ldots,i_{s-1}}$ as random i.i.d. samples indexed by $i_1,\\ldots,i_{s-1}$, the central limit theorem states that\n\\begin{equation}\n\\sum_{i_1,\\ldots,i_{s-1}} W_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}} \\sim \\mathcal{N}(N^S \\mathbb{E}[W_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}}], N^S \\text{Var}(W_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}}))\n\\end{equation} \n \nas the sum tends to infinity. Here, we simply use the i.i.d. assumption even though stronger version of the Central Limit Theorem need weaker assumptions than i.i.d. 
When the $W$- and $h$-chains are uncorrelated, we take the mean of this distribution as a point estimate (note, however, the growing variance) to arrive at the following approximation:\n\\begin{equation}\n\\sum_{i_1,\\ldots,i_{s-1}} W_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}} \\approx N^S \\mathbb{E}W_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}} = N^S \\mathbb{E}W_{i_1,\\ldots,i_{s-1}} \\mathbb{E} h_{i_1,\\ldots,i_{s-1}}.\n\\end{equation} \n \nSince $\\mathbb{E}W_{i_1,\\ldots,i_{s-1}} = \\frac{1}{N^S} (W^s)_{jp}$ when $i_1,\\ldots,i_{s-1}$ are distributed uniformly over valid index ranges, we conclude that \n\n\\begin{equation}\n\\sum_{i_1,\\ldots,i_{s-1}} W_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}} \\approx (W^s)_{jp} \\frac{1}{N^S} \\sum_{i_1,\\ldots,i_{s-1}} h_{i_1,\\ldots,i_{s-1}}.\n\\end{equation} \n\nWe have also added the above explanation beneath equation (S14) in the updated manuscript.",
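*Editorial illustration (not part of the original response):* the mean-field step in approximation (a) above can be checked numerically. The sketch below uses our own illustrative assumptions (random nonnegative $W$, random $h$, a short chain length $s$) and compares the exact weighted chain sum with the estimate $(W^s)_{jp}\cdot\frac{1}{N^{s-1}}\sum_{i_1,\ldots,i_{s-1}} h_{i_1,\ldots,i_{s-1}}$; all variable names are ours, not the authors'.

```python
# Sketch: compare the exact weighted chain sum
#   sum_{i_1..i_{s-1}} W_{j i_1} h_{i_1} W_{i_1 i_2} h_{i_2} ... W_{i_{s-1} p} h_{i_{s-1}}
# with the mean-field estimate (W^s)_{jp} * mean(h)^(s-1) used in approximation (a).
import numpy as np

rng = np.random.default_rng(0)
N, s, j, p = 50, 4, 0, 1
W = rng.uniform(0.0, 1.0, size=(N, N)) / N      # random nonnegative weights (illustrative)
h = rng.uniform(0.0, 1.0, size=N)               # surrogate activation derivatives

# Exact chain sum as an interleaved matrix product: W diag(h) W diag(h) ... W
D = np.diag(h)
exact = W.copy()
for _ in range(s - 1):
    exact = exact @ D @ W
exact_jp = exact[j, p]

# Mean-field estimate: (W^s)_{jp} times the average h-chain, i.e. mean(h)**(s-1)
approx_jp = np.linalg.matrix_power(W, s)[j, p] * h.mean() ** (s - 1)

print(f"exact: {exact_jp:.6g}, mean-field: {approx_jp:.6g}, "
      f"relative error: {abs(exact_jp - approx_jp) / exact_jp:.3%}")
```

With independent $W$ and $h$, as in this toy setup, the relative error is small, consistent with the central-limit argument above.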
" **Line 613: In the ReLU setting, why is the total number of different activation chain combinations equal to the product of the number of activations at each time step? Maybe I am missing something obvious.**\n\nAs for number of activation chain combinations, based on the definition of h-chain in the response above, we are essentially choosing a chain of numbers: at each time step, the number we choose correspond to the index of an activated neuron, and the number of indices we can choose from correspond to the number of activated neurons. Thus, the number of activation chain combinations is equal to the product of the number of activations by analogy to the previous paragraph. \n\nWe are sorry this was not clear in our initial submission, and we hope our more precise definition of h-chain in the response above can help with the clarification. We also added a sentence in the updated manuscript to clarify the reason. \n \n**Should $N_s$ in Eqs (S13) and (S15) be $N^s$?**\n\nWe thank the reviewer for catching these typos and we have fixed it in the updated manuscript. \n \n**The work by Pogodin and Latham [1] seems similar in spirit to your work. They consider a 3-factor learning rule in a deep-network that avoids backprop by using a layer specific modulation of the updates.**\n\nWe thank the reviewer for bringing up this very interesting work that addresses the important direction of biologically plausible alternatives to backpropagation (in deep feedforward networks). Indeed, this work seems to fit very nicely in our discussion on 3-factor learning and neuromodulation in Related Works, so we have added [1] to our citation in the updated manuscript. \n\n[1] Pogodin, Roman, and Peter Latham. \"Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks.\" Advances in Neural Information Processing Systems 33 (2020): 7296-7307.\n",
" I am still having trouble following some steps in the derivation and could use more mathematical hand holding.\n\n- What are the mathematical definitions of $\\partial$ and $\\text{d}$? You provide a brief description and a pointer to Ref. [5]. However, the derivation of your algorithm is rather mathematically involved and the derivatives are central to the derivation, so it would be useful to have precise mathematical definitions of these derivatives in the appendix (rather than pointers to another reference).\n\n- Thank you for clarifying your justification for assuming uncorrelated weights and activities. Can you be more mathematically precise about how approximation (a) follows from assuming that $W$ and $h$ chains are uncorrelated and the central limit theorem applies. What is the central limit approximation you are making? What does it mean for the \"$W$ and $h$ chains to be uncorrelated\"?\n\n- Line 613: In the ReLU setting, why is the total number of different activation chain combinations equal to the product of the number of activations at each time step? Maybe I am missing something obvious.\n\n- Should $N_s$ in Eqs (S13) and (S15) be $N^s$?",
" Thank you for your response. I am currently checking your revision; however, in the meantime, I have another comment:\n\n- The work by Pogodin and Latham [1] seems similar is spirit to your work. They consider a 3-factor learning rule in a deep-network that avoids backprop by using a layer specific modulation of the updates. Sorry I didn't recall this paper in my initial response.\n\n[1] Pogodin, Roman, and Peter Latham. \"Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks.\" Advances in Neural Information Processing Systems 33 (2020): 7296-7307.",
" We thank the reviewers for their time, insightful summaries and contextualization of our work, and constructive feedback toward its improvement. Through changes (also completely described below) made in our uploaded revision, we believe we managed to address all the concerns raised by the reviewers, and bring their points to light for readers as well. In particular, we have now added two new requested experiments (Appendix Figures S3 and S4) — one increasing the difficulty of an existing task and the other a completely new experiment (the ‘copy task’) — both of which address specific reviewer concerns and support and extend our original conclusions. The changes to the manuscript are colored in blue in the present version for easy referencing. Overall, we believe that these changes have significantly improved both the content and presentation of our submission and appreciate the reviewers’ ideas and time in inspiring them.",
" We thank the reviewer for their time, and for mentioning that the paper is well written and addresses an important question. Below we address all the attendant issues and concerns raised to the best of our understanding.\n\n**A patch of approximations**: In carefully reviewing our paper, we definitely appreciate the reviewer’s concern and perspective, and see how the presentation could have led to the impression that the approximations lead the algorithm rather than the other way around. We have now added a sentence before Approximation 1 (Eq. 3) stating that the mathematics of neuromodulatory broadcast leads to the approximations. We would like to emphasize that the overriding framework proposed here is *communicating the credit information via cell-type-specific neuromodulators and processing it at the receiving cells via pre-determined temporal filtering taps*. The implementation of this framework utilizes mean-field type approximations, whose justification comes from known neurobiology constraints: they are needed so that our model can abide by firmly established neurobiology constraints (e.g., locality, causality, Dale’s Law). We would like to remark that our approach — starting from the exact gradient and then introducing approximations that would lead to a biologically plausible learning rule — is a common practice in the area of biologically plausible learning rule (e.g. see [4-6] in the manuscript). In particular, a central contribution of this paper, in addressing our goal, is to understand how cell-type-specific neuromodulation could contribute to temporal credit assignment, and to model such mechanism, cell-type-approximations were made due to the cell-type-specificity of neuromodulation [Smith et al., 2019].\n\nRelated to the comment below, we have now added explanations offering mechanistic intuition of why the approximation works on top of the full Theorem proof in Appendix. We hope that the improved treatment of the approximations and how they are driven by a mechanistic theory will address the reviewer’s understandable concern.\n\n**“The arguments for each approximation are mostly left in the appendix, which is the core of the paper. I would have wanted to be shown in the paper why they are possible and not have to trust the authors and move on. ”**: \n\nWe understand the need to more explicitly give arguments in the main text. Unfortunately, we had to defer proofs of the formal statements to the appendix due to space limitations, which is common practice in NeurIPS. The reader does not have to trust the authors and can check the proofs and other supplementary material in the appendix. To address the reviewer’s concern, we have now added explanations offering mechanistic intuition for these approximations in the main text (Section 3.2). \n\n**“I observe mainly the discrepancy between the claims of the paper and what is actually shown in the paper.”**: \n\nWe respectfully disagree. If the reviewer could explicitly state the discrepancies between claims vs what is shown, we will be happy to address those issues. We would also like to point out that in response to other reviewer’s comments, we have now expanded the delayed XOR experiment to study the effect of the delay length and included a new task (“copy task”) to demonstrate the superiority of ModProp over existing biological plausible learning rules (in Appendix B). 
We hope that this expanded set of experimental evidence together with the newly added explanations of the theoretical results will address the reviewer’s concern.\n\n**Please continue to the comment below for part (2/2) of our response.**\n",
" **“Any predictions made that could make us believe it is at all plausible… How do you propose to present the main justification of the important assumptions that make the model plausible?”**\n\nFor justifying the assumptions that make the model biologically plausible (i.e. not violating any known biological constraints), we would like to emphasize that biological plausibility is a guiding principle in developing our model: all of our assumptions are based either on firmly established constraints of neurobiology such as the locality of synaptic transmission, causality, and Dale’s Law, or on emerging evidence from large-scale datasets, such as the cell-type-specificity of certain neuromodulators [Smith et al., 2019] or the hierarchical organization of cell types [Tasic et al., 2018]. To address the reviewer’s concern, we edited the manuscript to emphasize these neurobiological constraints and evidence. \n\n**What predictions can be made based on your model of the modulating candidates? **\n\nWe thank the reviewer for the excellent and important question. \n\nFigure 1 caption states that *“... predicts that the modulatory signal each neuron receives can represent a filtered credit signal regarding how its past firing (arbitrary steps back) contributes to the task outcome.”* Our empirical experiments (i.e., learning curves, alignment angle experiment) and theoretical development provide in-silico evidence for this prediction to be validated in future neurobiological experiments. A further prediction of our model is that the neuromodulation acts on its recipients (to signal credit information) in a synapse-type-specific manner. This is due to $\\alpha,\\beta$ dependence of F in Eq. 6.\n\nWe would also like to suggest the outline of an experiment that can test some of the model’s predictions on a family of signaling molecules (e.g., neuropeptides): physiology of multiple individual cells can be monitored in modern neurobiology experiments. Blocking the peptidergic receptors of the neurons that are involved with learning a task and comparing the performance to that without blocking can provide a strong test for the role of peptidergic communication. By changing the task parameters, one could also estimate the temporal extent of credit assignment enabled by peptidergic signaling. In response to the reviewer’s great question, we have now added a discussion on experimental prediction also to the first paragraph in Discussion. \n\nWe would also like to remark that equations derived in our work offer potentially testable predictions, since we assign putative identities and mechanisms to the terms that appear in our equations. (e.g., cell-type-specific local neuromodulator, global neuromodulator, activity of cell of interest, activity of neighboring cell, filtering at the postsynaptic site). \n\n**References:**\n\n[Smith et al] \"Single-cell transcriptomic evidence for dense intracortical neuropeptide networks,\" elife, 2019.\n\n[Tasic et al., 2018] \"Shared and distinct transcriptomic cell types across neocortical areas,\" Nature, 2018.\n\n",
" We thank the reviewer for the supportive comments on our method and for mentioning that it suggests potentially testable predictions about the role of neuromodulators. We also thank the reviewer for highlighting the importance of the problem and the challenge it represents in the field. \n\n**Clarity of technical arguments**: We thank the reviewer for multiple specific suggestions to significantly improve the presentation. We have tried to address all of them. Specifically, in addition to the answers to specific questions below, we have now revised the manuscript, where we\n\n- Made equation (1) and (S4) (previously (10)) the same for consistency \n\n- Added a brief explanation for the ∂ vs d notation in the beginning of Appendix A.2 (due to space limit) and alerted the reader to that in the main text after the first appearance of the notation. Briefly, ∂ denotes direct dependency and d accounts for all (direct and indirect) dependencies, following the notation in [5]. \n\n- In relation to the point above, fixed the typos in (3), $e_{pq,t}$ and (S5) (formerly (11)): changed d to ∂ as the reviewer correctly pointed out. Also, the reviewer’s understanding is correct that $h_{j,t}$ is simply the derivative of the ReLU function evaluated at $s_{j,t}$. \n\n- Removed the forward equation reference in line 134\n\n- Made another mention of z=ReLU(s) beneath (3). As a side note, we mentioned the use of ReLU activation in Discussion in our initial submission. \n\n**Please continue to the comment below for part (2/2) of our response.**",
" **Comparison with RTRL instead of BPTT**: We agree that this point needed further clarification and appreciate the reviewer’s concern. RTRL and BPTT both compute the exact gradient. They differ only in how they parse this computation. Therefore, they should be identical in terms of performance. The reason we chose BPTT here is that RTRL is prohibitively expensive to compute (O(N^3) memory and O(N^4) computation costs). Thus, our choice merely reflects a cost-cutting measure without affecting the results. To address the reviewer’s understandable concern, we have now emphasized this point in the manuscript.\n\n**Relationship to Ref. [6]**: We would like to emphasize that while Ref. [6] introduced the notion of learning with cell-type-specific neuromodulation, their algorithm only addresses the contributions to the task error of neurons that are at most 2 synapses away. Our manuscript shows a way of removing this significant limitation and enables the communication and calculation of credit from neurons that can be arbitrarily many synapses away. It also experimentally demonstrates the benefit of this key contribution. To address the reviewer’s concern, we have carefully revised the manuscript to pinpoint the mechanistic differences between our work and Ref. [6]. (e.g., Ref. [6] can be considered as a special case of our work, where the filter length is constrained to a single tap.) We have also added Appendix C (due to space limit) to expand the discussion on Ref. [6] and other cited works such as KeRNL and SnAP, and alerted the reader to that section in Related Works.\n\n**Line 143, uncorrelated activities and weights**: The reviewer is correct that neuronal activity and synaptic weights are not uncorrelated, strictly speaking. On the other hand, considering that a single neuron may have thousands of synaptic partners, the activity of the neuron or its time derivative is at best weakly correlated to any one synaptic weight. (e.g., the “trial-to-trial variability” in controlled experiments could perhaps be considered as an example: the firing of individual neurons can demonstrate significant variability under identical experimental conditions even though the synaptic weights are supposed to remain essentially the same.) We take advantage of this phenomenon in our model and ignore these weak correlations. To address the reviewer’s concern, we have now revised the manuscript to spell out this reasoning.\n\n**Line 568, uncorrelatedness of W and h chains**: Since W refers to the synaptic weight and h refers to the derivative of the activity, we would like to refer to the uncorrelatedness argument above for activities and weights. The reviewer is correct in pointing that analysis of nonlinear networks (e.g., with ReLU activations) can be hard. (This is a key reason for the ever-increasing analysis of linear networks in the field.) On the other hand, as mentioned above, neuroscience experiments offer empirical evidence: nonlinearity is a hallmark of neuronal computation. Yet, neuronal activity is at best weakly correlated to the strength of any one synapse, due, in part, to the involvement of many synapses. \n\nAs for how such assumption of uncorrelatedness as well as stationarity of activity (Approximation 1) affects learning performance, we empirically demonstrated in Figure 3 that little performance degradation due to such approximation is seen for the studied neuroscience-motivated tasks. 
On the other hand, as discussed in our submission, such approximation restricts the spatiotemporal precision of the credit signal, making ModProp struggle with tasks that require precise input integration, e.g. sequential MNIST. As we argued, sequential MNIST is a task that would also be difficult for the brain to solve. \n",
" We thank the reviewer for supportive comments on the importance of the problem and the potential promise of our approach, and agree with their highlighting of the low-cost aspect of the proposed ModProp method.\n\n**Somewhat weak experimental evaluation**: We appreciate this point and agree that further experimental validation was called for to strengthen our paper. In our revision, we have added further experiments that address the reviewer request in two ways:\n\n- We have taken this reviewer’s suggestion and implemented the copy task with different sequence lengths. We have also studied the effect of random modulatory weights, as suggested by the reviewer. We edited the manuscript to include these experiments (Appendix Figure S4 and referenced in the main text). Briefly, ModProp outperforms other biologically plausible alternatives for the copy task at different sequence lengths, despite using random modulatory weights. \n\n- We performed additional experiments with the delayed XOR task by increasing the delay duration, which increases the difficulty of the underlying task. We edited the manuscript to include these experiments (Appendix Figure S3). Briefly, these results suggest that at an increased delay period, ModProp still outperforms other biologically plausible rules for RNNs.\n\nOverall, these two experiments further strengthen our basic conclusions that ModProp can outperform other biologically plausible alternatives across a range of RNN tasks with varying difficulties. \n\n**Presentation of Section 3.1**: We thank the reviewer for pointing this out. In response, in our revision we improved the presentation of this section by: (1) fixing the equation referencing issue the reviewer pointed out; (2) adding a brief explanation for the biological implementation and referencing Appendix D.1 for greater detail; (3) added bolded subheadings for navigation in Section 3.1. \n\n**KeRNL/SnAp discussion**: We thank the reviewer for the great suggestion. We have now added a detailed discussion of these algorithms, pointing out the main differences of our work from these methods in Appendix C. In Related Works in the main text, we have carefully alerted the reader the contents of that Appendix section. \n\n**Efficiency of different algorithms**: We agree with the reviewer’s comment. To address this concern, we have now expanded upon the existing discussion and include a new comparative paragraph on the complexities of the SnAP, e-prop, and RFLO algorithms at the end of Appendix D.\n\n**Implementation of matrix powers**: Online implementation of matrix powers is indeed costly and not biologically plausible. Our theory suggests that this computation can be accurately approximated without such online calculations when neurons within a cell type have similar synaptic connectivity. In this case, the matrix powers of weight averages can be pre-calculated. In biological terms, noting that the entries of the matrix powers are used as the values of different filter taps, these can be genetically encoded as part of the cell type identity and optimized over evolutionary time scales. Thus, while the individual needs to tune the synaptic weights in an experience-dependent manner, the modulatory mechanism (and the corresponding matrix powers) do not need to be updated with a similar frequency (as backed by the superior performance of even the random fixed modulatory weights in our simulations). We finally note that such filtering mechanisms are ubiquitous in both the cell and the synapse. 
We thank the reviewer for the question and we have now added aspects of the above discussion to the main text (in Section 3) to clarify this point.\n\n**Determining cell types**: We think this is a great question. We had to remove a relevant discussion in the original submission due to space limitations. Multiple studies suggest that cells of the same type demonstrate consistent properties across a wide range of features, including synaptic connectivity, modulatory connectivity, molecular identity, in-vivo activity, morphology, and intrinsic electrophysiology (e.g., synaptic time constants) [Campagnola et al., Smith et al, Bugeon et al., Schneider et al., Gouwens et al., Gala et al.]. Therefore, our understanding is that one would end up with similar groupings of cells, somewhat independent of the particular grouping criteria. Here, we use two cell types with consistent wiring and type of connectivity (i.e., capturing the main excitatory/inhibitory division as well as consistent synaptic and modulatory connectivity) in our relatively simple models. To address the reviewer’s question, we have now incorporated aspects of this discussion into the main text (in Section 3).",
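*Editorial illustration (not the authors' code):* a minimal sketch of the pre-computation idea described above, under our own simplifying assumptions (two cell types, excitatory and inhibitory, with type-consistent connectivity, and a normalization that may differ from the paper): the type-averaged weight matrix and its powers are computed once, offline, and then reused as fixed filter taps.

```python
# Sketch: pre-compute cell-type-averaged connectivity and its powers as fixed filter taps.
import numpy as np

rng = np.random.default_rng(1)
N_e, N_i = 80, 20
types = np.array([0] * N_e + [1] * N_i)            # 0 = excitatory, 1 = inhibitory
W = rng.normal(0.0, 0.1, size=(N_e + N_i, N_e + N_i))
W[:, types == 0] = np.abs(W[:, types == 0])        # Dale's law: E columns nonnegative
W[:, types == 1] = -np.abs(W[:, types == 1])       # I columns nonpositive

# 2x2 type-averaged connectivity: mean weight from presynaptic type b to postsynaptic type a
W_bar = np.array([[W[np.ix_(types == a, types == b)].mean() for b in (0, 1)]
                  for a in (0, 1)])

# Fixed filter taps: powers of the type-averaged matrix, computed once (no online matrix powers)
taps = [np.linalg.matrix_power(W_bar, s) for s in range(1, 6)]
for s, tap in enumerate(taps, start=1):
    print(f"tap {s}:", np.round(tap, 4).tolist())
```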
" **References:**\n\n[Campagnola et al] \"Local connectivity and synaptic dynamics in mouse and human neocortex,\" Science, 2022.\n\n[Smith et al] \"Single-cell transcriptomic evidence for dense intracortical neuropeptide networks,\" elife, 2019.\n\n[Bugeon et al] \"A transcriptomic axis predicts state modulation of cortical interneurons,\" Nature, 2022.\n\n[Schneider et al] \"Transcriptomic cell type structures in vivo neuronal activity across multiple time scales,\" bioRxiv, 2022.\n\n[Gouwens et al] \"Integrated morphoelectric and transcriptomic classification of cortical GABAergic cells,\" Cell, 2020.\n\n[Gala et al] \"Consistent cross-modal identification of cortical neurons with coupled autoencoders,\" Nature computational science, 2021.",
" We thank the reviewer for supportive comments on the importance of the problem and the potential promise of our approach.\n\n**Very long-term credit assignment**: The reviewer is correct in observing that the influence of distant events in time is small. (As the reviewer also mentions, BPTT suffers from a similar problem and BPTT implementations typically put a hard limit on the temporal window size.) This phenomenon may be related to well-known distinctions in biology between short- and long-term memory mechanisms. The circuits implemented in our manuscript are perhaps best seen as counterparts to short-term (working) memory. If these circuits are complemented by a distinct long-term memory architecture (e.g., mimicking the hypothesized roles of brain structures such as the hippocampus and the entorhinal cortex. Also see ref. [59] in the manuscript.), it may become possible to perform efficient credit assignment with ModProp for events distant in time as well.\n\n**Referring to supp. material**: Thank you for this suggestion, which we have implemented as suggested -- we now both explicitly refer to the Appendix and use a separate numbering system for supplemental equations and figures. (e.g., Appendix Eq. S14)\n\n**Delayed XOR task with different delays**: We appreciate this important question -- and as the reviewer requested, we have now performed an additional experiment with a longer time delay in the delayed XOR task. We have edited the manuscript to report these experiments (new Appendix Figure S3 and reference in the main text). Briefly, these new results suggest that at an increased delay period, ModProp still outperforms other biologically plausible rules for RNNs. At the same time, its learning performance degrades compared to its performance at a shorter delay period. We believe this observation, now included in the paper, strengthens the paper by reinforcing the reviewer's important point on vanishing gradients as well as the discussion point on short-term vs long-term memory mechanisms which the reviewer also raised.",
" In this paper, the authors propose a biologically-plausible temporal credit assignment rule for recurrent neural networks called ModProp. This work is motivated by recent experimental evidence on the presence of local neuromodulatory networks in the brain. Here, they propose that these synapse-type-specific local modulatory signals are received via low-pass filtering of eligibility traces at post-synaptic neurons. \nUsing the framework of discrete-time rate-based RNNs, they derive the ModProp synaptic weight update rule and describe its properties. They also provide simulation results comparing the performance of ModProp vs. other biologically-plausible learning rules and Backprop through time on three temporal processing tasks. ModProp outperforms other previously proposed learning approaches, indicating that it is a promising candidate for understanding how the biological networks perform temporal credit assignment. \n\n This paper addresses the important question of how credit assignment might occur in biological recurrently connected neural networks. Here they explore the potential role of local neuromodulatory signaling mechanisms and propose the ModProp learning rule. The idea of synapse-type-specific local modulatory signaling complementing Hebbian learning and global neuromodulation does seem like a viable theory. Backed by the empirical results presented here, I think ModProp is a promising candidate for biologically plausible learning in recurrent networks and warrants further exploration. \n\nAlthough ModProp can technically perform temporal credit assignment over arbitrarily long durations, the influence of distant events in time is negligible. This is akin to the problem of vanishing gradients in backpropagation through time, which makes ModProp ill-suited for very long-term credit assignment. The authors do mention this drawback in the discussion section.\n\nI think this paper is quite constrained by the page limit. Despite this constraint, I believe the authors have done a reasonably good job of describing the important concepts involved in the proposed learning rule. \nMinor suggestions: the main paper refers to certain equations (e.g. 14) and figures that are in the supplementary material. It might be helpful for the readers to point out that this content is in the methods/supplementary material. \n - Have you tried the delayed XOR task with different delays between the cues? How would the performance of ModProp vary as a function of this delay?\n The authors adequately address the limitations and the potential long-term societal impact of their work in the discussion section. ",
" Evaluating the parameter gradients of recurrent neural networks forward in time ([17]) is both costly and not biologically plausible. This paper introduces novel approximations to the synaptic weight gradients of recurrent neural networks, which can be evaluated forward in time, using procedures that are both less costly and more biologically plausible than full RTRL. The authors discuss possible biological implementations based on plasticity modulation and compare the performance of their method to relevant baselines in a number of experiments. Finding biologically plausible learning rules which improve upon e-prop/RFLO is a very important research topic. The decomposition of the gradient and approximations proposed in this paper are a sensible approach forward, similar in spirit (but different from) recent algorithms such as SnAp and KeRNL.\n\nMy major concern with the current paper is the somewhat weak experimental evaluation. I leave some additional comments for the authors below, on points where I think the paper could be improved:\n\n- The presentation of section 3.1 could be improved. In particular the proposed biological implementation (an essential point of the present work) should be clarified and discussed in greather depth. [Small additional note: there is a problem with equation references; for example, in line 140, Eq. 14 (supplemental material) is referenced instead of Eq. 2.]\n\n- Weakness of experimental part of the paper: the paper would substantially improve if the authors demonstrated the superiority (and the cases where the different approximations fail) of modprop over e-prop in benchmarks that are not as toyish as pattern generation and delayed XOR, but perhaps not as difficult as sequential MNIST. Perhaps the copy task (with varying sequence length) is a good additional synthetic benchmark? In particular, it would be very good to assess how well the update with random modulatory weights performs in more difficult problems. Given that the focus of this work is on introducing a new approximation to the full gradient on RNNs, the somewhat weak experimental study is the major current weakness of the paper.\n\n- Given that there is a section on efficient computer implementations, a small application (or at least a discussion) to LSTM-like models and a comparison to algorithms such as SnAp (which also reduce the bias compared to e-prop/RLFO) would also enrichen the paper and help understand how well the approximations introduced here impact performance, as well as show their potential usefulness in machine learning, beyond their biological plausibility merits. - How would the matrix power which appears in the weight update be calculated in a biological network? Even when averaging over connections of the same type, it's unclear to me how plausible this operation is.\n\n- Can the authors discuss in more detail how to approach the problem of determining cell types? For example, could it make sense to group by time constants, when there is heterogeneity in time constants? Can the authors provide some heuristics or intuitions on how they expect such choices to affect their rule?\n\n- Can the authors discuss KeRNL and SnAp in more detail?\n\n- As discussed in the strengths & weaknesses section, can the authors extend their experimental evaluation to more clearly demonstrate the superiority over e-prop/RFLO, including the random modulatory version? 
Apart from the relatively weak experimental evaluation of the proposed approximations, the authors have done a good job in discussing the limitations of the current work.",
" The authors propose a method of temporal credit assignment method for training recurrent neural networks, which is an approximation of real time recurrent learning. Their method can be implemented in a biologically plausible neural network with neuromodulators. Strengths:\n\nThe authors tackle and challenging and important problem in theoretical neuroscience: biologically plausible training of RNNs (though an area I'm not very familiar with). I find their general approach quite interesting and it appears to suggest (potentially testable) predictions about the role of neuromodulators. Overall, I think it is a useful contribution to the field.\n\nWeakness:\n\nThe derivation of their algorithm is highly technical and requires a lot of notation. I found it challenging to follow their arguments. First, the main text includes a number of forward references to the appendices, and then the equations in the main text and appendices are not aligned. Sometimes this appears due to some approximation that is made, but other times it's not clear to me. I think overall their technical arguments could be presented with more clarity. I also think that the authors could devote more space to explaining the approximations that they make, and when these approximations are valid or not. Numerics:\n\n- Why don't you compare your algorithm with RTRL since it's an approximation of RTRL? Wouldn't that help isolate if the performance gap with BPTT is due to the approximations or because RTRL isn't as good as BPTT?\n\nRelation to biology:\n\n- I think it's worth further discussing the relationship of this work to [6]. Both works use cell-type specific neuromodulation, but the similarities and differences of the biological interpretations are not clearly stated.\n\nDerivation: I think the clarify and precision of the technical arguments can be improved, especially for a novice like me who is not familiar with RTRL.\n\n- Should Eq. (1) and Eq. (10) be the same?\n- Line 129: What are the mathematical definitions of $\\partial$ and $\\text{d}$? One involves the chain rule and the other doesn't?\n- Line 134: There are a number of forward references to equations in the appendix, which makes the paper difficult to read on its own.\n- Eq. (3): Should \"$\\text{d}$\" be \"$\\partial$\"? \n- Is $z=\\text{ReLU}(s)$? It seems like that needs to be true for Eq. 3 to hold. Why not say so in the main text?\n- Line 143: Is it reasonable to assume that neural activities and weights are uncorrelated? Do you have experimental evidence or a reference? My naive impression would be that this is *not* true.\n- Line 149: Based on Eq. (1), should $e_{pq,t}$ be $\\frac{\\text{d}s_{p,t}}{\\text{d}W_{pq}}$? And wouldn't that be $\\frac{\\partial s_{p,t}}{\\partial W_{pq}}$ in this case?\n- Eq. (11): Should $\\frac{\\text{d} z_{j,t}}{\\text{d} s_{j,t}}$ be $\\frac{\\partial z_{j,t}}{\\partial s_{j,t}}$? Is $h_{j,t}$ simply the derivative of the ReLU function evaluated at $s_{j,t}$? \n- Line 568: A more detailed justification for approximation (a) would be greatly appreciated. I see why the approximation is exact in the linear setting. What does it mean for the $W$ and $h$ chains to be uncorrelated? Is that a reasonable assumption in the ReLU setting? The authors make a number of approximations when deriving their algorithm and I am interested to better understand to what extend these approximations affect the performance of their algorithm. 
In the discussion section, the authors suggest investigating their algorithm in situations where the assumptions are violated.",
" ### UPDATE 08/08/2022: Contribution score increased.\n\n\n### UPDATE 08/05/2022: Score increased slightly after reading the response and updated version of the submission. \n\n\nThis paper proposes a biologically plausible implementation of training recurrent neural networks as a biologically plausible alternative to back-propagation through time and already existing less plausible algorithms. This works lays out the approximation necessary for making the model believable as to what type of cell could be implementing it. The paper also proposes some experiments to valide the model. The paper is well-written, and results and theorems are nicely explained using informal explanations and leaving the full proof to the appendix. The paper is somewhat novel and addresses an important question, which is often less addressed than that of backprop in non-recurrent networks. \n\nUnfortunately, the \"framework\" proposed seems to be more of a patch of approximations, each solving various problems rather than a mathematically grounded approach. The paper offers a long introduction of what are the possible candidates for their algorithms to justify this work, but it is not validated, nor any predictions are made that could make us believe that it is at all plausible. \n\nThe arguments for each approximation are mostly left in the appendix, which is the core of the paper. I would have wanted to be shown in the paper why they are possible and not have to trust the authors and move on. I observe mainly the discrepancy between the claims of the paper and what is actually shown in the paper. \n\n What predictions can be made based on your model of the modulating candidates? \n\nHow do you propose to present the main justification of the important assumptions that make the model plausible? \n\n They have addressed some of the limitations. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"EzgIWeV72Rp",
"jvJ31NsbWS",
"wiUydbJt9xE",
"EUugZmzV58s",
"6i9x-8ohuvl",
"EdvAz5nCGsV",
"seZcY92TfoF",
"xY7zSa_zSSC",
"Ohv9vEkCu7Z",
"uoQLvw_FJtK",
"elzW55GFmgJ",
"nHsR12K3SJU",
"nHsR12K3SJU",
"4MszX1b-X6L",
"4MszX1b-X6L",
"nips_2022_jPx7vYUNUCt",
"Z8EbupXOFME",
"Z8EbupXOFME",
"gIkEZrrxzBi",
"gIkEZrrxzBi",
"vDUy5jmiO-T",
"vDUy5jmiO-T",
"yR037qddZov",
"nips_2022_jPx7vYUNUCt",
"nips_2022_jPx7vYUNUCt",
"nips_2022_jPx7vYUNUCt",
"nips_2022_jPx7vYUNUCt"
] |
nips_2022_OlGu-BXgJ- | Wasserstein $K$-means for clustering probability distributions | Clustering is an important exploratory data analysis technique to group objects based on their similarity. The widely used $K$-means clustering method relies on some notion of distance to partition data into a fewer number of groups. In the Euclidean space, centroid-based and distance-based formulations of the $K$-means are equivalent. In modern machine learning applications, data often arise as probability distributions and a natural generalization to handle measure-valued data is to use the optimal transport metric. Due to non-negative Alexandrov curvature of the Wasserstein space, barycenters suffer from regularity and non-robustness issues. The peculiar behaviors of Wasserstein barycenters may make the centroid-based formulation fail to represent the within-cluster data points, while the more direct distance-based $K$-means approach and its semidefinite program (SDP) relaxation are capable of recovering the true cluster labels. In the special case of clustering Gaussian distributions, we show that the SDP relaxed Wasserstein $K$-means can achieve exact recovery given the clusters are well-separated under the $2$-Wasserstein metric. Our simulation and real data examples also demonstrate that distance-based $K$-means can achieve better classification performance over the standard centroid-based $K$-means for clustering probability distributions and images. | Accept | This paper provides a Wasserstein-based k-means formulation for clustering probability distributions. Though the overall reception was mildly positive but two reviewers raised their scores following the author feedback. There remains some doubts that there is enough evidence to support all claims in the paper--most relevant, referees have noted to us that fundamentally a strong empirical validation is lacking. We recommend revising the paper and focusing on a strong empirical narrative (without relegating new details to the appendix), together with improving the exposition as outlined by reviewers | train | [
"ASvdpUWTyVy",
"rys1kUGLNs_",
"xXHb2SSYNKQ",
"R4oABfb2SDD1",
"xqqr05hDDuh",
"2O-AUBm9g8q",
"g6I-7EGHfg9",
"vWKvv5lf1hy"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" (__On scalability of our approaches__) The scalability issue is one of our concerns. As we pointed out in Appendix B, we may consider several methods to bring down the time cost e.g. subsampling-based method for K-means and SDP. In the revision, we can observe the time complexity issue for Wasserstein $K$-means methods. Thus, we might consider more possible ways to accelerate the calculation for Wasserstein distance in the future.\n\n\\\n(__On the `$K$-means' terminology__) We follow the convention in the SDP relaxed formulations of the Euclidean $K$-means clustering methods. In the Euclidean case, the distance-based (and its SDP formulation) and centroid-based formulations are equivalent, the SDP formulation is usually called _SDP relaxed $K$-means_. Even though the two formulations are not equivalent in Wasserstein space, we still abide this convention for the reason we have similar SDP structures. \n\n\\\n(__On experiments for real datasets__) Please refer to our response item ``On experiments for real datasets\" to reviewer k4tW.\n\n\\\n(__On the pairwise distances for real dataset__) We use Sinkhorn divergence to calculate the pairwise distances for MNIST the same as Experiment 4.1. Sinkhorn distance approximates the Wasserstein distance fast by adding an entropy term to the original optimal transport. And Sinkhorn divergence is a debiased form of Sinkhorn distance. One may refer to the paper _Sinkhorn Distances: Lightspeed Computation of Optimal Transport_ by Cuturi, and the paper _Learning Generative Models with Sinkhorn Divergence_ by Genevay, Peyré, Cuturi for more details.\n\n\\\n(__On adding discussion session__) We have now added a discussion section in the rebuttal revision, which discussed some limitations and future directions.\n",
" (__On in-depth analysis of the superior behavior for our approaches__) Yes, it is a good point that data imbalance might be the primary reason for bad clustering accuracy. However, we observe that in our counterexample (Example 3) of illustrating the failure of centroid-based Wasserstein $K$-means, the failure is actually not due to the cluster imbalance. In fact, if we always set the number of copies of $\\mu_4$ to be equal to $2m+1$, where $m$ is number of copies of $\\mu_1$ and $\\mu_2$, then by Lemma 4 we know centroid based $K$-means will fail given sufficiently large $m$ while those two clusters are balanced.\n\\\nIn our real data example, the superior behavior of distanced-based Wasserstein $K$-means and its SDP relaxation would be more evident for cases with unbalanced cluster sizes as you observed. However, the core reason should be the variability of probability measures within the true clusters. Intuitively speaking, the larger variability one cluster possesses, the more likely it would be for us to obverse the peculiar behaviors of barycenter-based Wasserstein $K$-means methods since the barycenters are no longer representing the clusters due to their instability or irregularity as we discussed in Section 2.1. In particular, in the MNIST dataset, we find that the within-cluster variability is partly from different orientations of the same digit, which leads to the worse performance of D-WKM.\n\n\\\n(__On complexity analysis of the algorithm__) If you are referring to the theoretical analysis, please find the discussion at the beginning of the next response item ``On experiments for real datasets\", which is now also discussed in Appendix B.\nIf you are referring to the run time analysis for the numerical experiments, we have now added the time cost for the real datasets. For example, from Table 4 we can observe that all three Wasserstein $K$-means approaches have time complexity issues when we enlarge $n$. The nearly quadratic grow of time costs for W-SDP and D-WKM is due to the fact that calculating pairwise distances is the computational bottleneck for sample size of order $10^2$. The large variance for B-WKM for larger sample size from Table 3 is due to the convergence of the algorithm, where the total iterations for B-WKM achieves maximum iteration 100 for 2 replicates out of 10 total replicates. \n\n\\\n(__On experiments for real datasets__) Please refer to our response item ``On experiments for real datasets\" to reviewer k4tW.",
" (__On optimal transport background__)\nWe added some background introduction about optimal transport in Appendix C in the rebuttal revision.\n\n\\\n(__On real data experiments__) \nPlease refer to our response item ``On experiments for real datasets\" to reviewer k4tW.\n\n\\\n(__On more discussions about clustering probability measures literature__) In fact, clustering probability measures is quite a new topic, there have not been many works related to it yet. We adopted your suggestion by adding more discussions on related work for clustering of probability distributions in the rebuttal revision. The concept of clustering general measure-valued data is introduced by Domazakis et al. [2019], where the authors proposed the centroid-based Wasserstein $K$-means algorithm. Verdinelli and Wasserman\n[2019] proposed a modified Wasserstein distance for distribution clustering. And after that, Chazal et al. proposed a method in Clustering of measures via mean measure quantization by first vectorizing the measures in a finite Euclidean space followed by an efficient clustering algorithm such as single-linkage clustering with $L_\\infty$ distance. The vectorization-based methods could improve the computational efficiency but might not be able to handle important aspects, such as shapes and orientations, of probability measures compared to clustering algorithms based the Wasserstein metric. Thus, vectorization-based method might fail for cases where problems are ill-conditioned, e.g., cluster sizes are highly unbalanced, or the within-cluster variability is high.\n\n\\\n(__On comparing with Euclidean $K$-means__)\nWe have added some numerical results based on the Euclidean $K$-means algorithm in the revision for real datasets. \nFrom Table 2, Table 5 and Table 6 we can observe that the Euclidean $K$-means algorithm consistently performs the worst for most cases since the original $K$-means cannot properly handle shapes and orientations of probability measures. For example, small perturbations of shapes and orientations of input images may drastically increase their mutual vectorized-Euclidean distances while will only incur small changes in the Wasserstein distances.\n\n\\\n(__On experiments and theories for non-Gaussian distributions__) We conducted the theoretical investigation of exact recovery for Gaussian distributions since the Wasserstein distance between Gaussians have closed form expressions, and yet they are rich enough to reveal non-trivial theoretical findings. To our knowledge, our results on Gaussians are the first about exact recovery of clustering distributions. We did simulations using Gaussians in order to numerically verify the theorem findings. All real data examples, including the MNIST and other datasets we added in the rebuttal revisions, are not Gaussians, although cluster labels in these examples cannot be exactly recovered. \n",
" (__On irregularity and non-robustness proofs__) Details about the non-robustness of Wasserstein barycenters are provided in Section 2 Example 2, where all calculations are straightforward as described therein. This example illustrates that small perturbation on the input distributions may incur dramatic changes in their barycenter. Our Example 1 illustrating the irregularity of Wasserstein barycenters is from Santambrogio and Wang [2016], where the barycenter of two convex-supported distributions is no longer convex-supported and therefore not preserving the shape; for self-contained purpose, we have added a derivation in Appendix D.1 in the rebuttal revision for the form of barycenter in Example 1. \n\n\\\n(__On experiments for real datasets__) The scalability would be a major issue here and we are only able to focus on the experiments with sample size of the order of $10^2$ to $10^3$ in the current work. As we pointed out in Appendix B, we may consider several methods to bring down the time cost e.g. subsampling-based method for $K$-means and SDP, which however is beyond the scope of the current work. We might consider more possible ways to accelerate the calculation for Wasserstein distance in the future.\nMore specifically, the time cost of the standard and most state-of-the-art Sinkhorn algorithms for calculating optimal transport is of the order $O(g^4/\\epsilon^2)$, and for calculating Wasserstein barycenters is $O(ng^4/\\epsilon^2)$, where $n$ is the sample size, $\\epsilon$ is the numerical accuracy, and $g^2$ is the discretization size for a $g$-by-$g$ image, e.g. $g=28$ for MNIST datasets. Thus, it costs time of order $O(10^{16})$ to calculate the Wasserstein distance between two pictures with scale $100\\times 100$ to achieve $10^{-4}$ accuracy. In practice, it takes 3.995 seconds to calculate the Wasserstein distance between two pictures with scale $36\\times 36$ using Sinkhorn divergence in our settings, which indicates that we need to wait at least 11 hours to get a single result from the barycenter-based Wasserstein $K$-means (B-WKM) with $10$ total iterations and sample size $n=1000$ apart from the time cost for calculating barycenters. Moreover, it takes at least 45 days to get one result by distance-based $K$-means approach (D-WKM) or its semidefinite program relaxation (W-SDP) from the same settings.\n\\\nAs for the cluster numbers, our simple experiment with two clusters already shows the inferiority of the barycenter based B-WKM algorithm, which we feel already provide strong numerical evidence to backup our theoretical arguments; the inferior behaviors for B-WKM compared to D-WKM and W-SDP are expected to be more evident as the problem becomes more complicated. However, we need the sample size $n$ to be of the order at least $10^3$ to conduct convincing experiments with more clusters, which may not be accessible for now. On the other hand, the goal of our article is not trying to provide a more efficient way to cluster probability measures but to observe and analyze the peculiar behaviors of Wasserstein barycenters and propose the D-WKW and its semidefinite program relaxation W-SDP. \n\\\nNevertheless, we did some preliminary works to hopefully address your concerns partially. We have now included more experiments with two new real-world datasets, Fashion-MNIST and USPS handwriting digits, in the rebuttal revision. Both new experiments involve three clusters with unbalanced cluster sizes. 
We can observe similar patterns in the performance of the different methods (Table 5 and Table 6) as for the MNIST dataset. As we can see, the mis-classification errors for the two distance-based methods, W-SDP and D-WKM, are consistently and strictly better than that of B-WKM by a fairly large margin. From Table 2 in the revision, we can observe that the results with a doubled sample size are quite close to the results from the original experiment. Therefore, we can expect the behaviors of the Wasserstein $K$-means approaches to be stable and consistent as the sample size grows.\n\\\nFinally, we shall emphasize that the reason why we use the error rate or mis-classification error as our clustering criterion is its ease of interpretation. The mis-classification error is the proportion of the data that are assigned to the wrong clusters, so there is no need to account for the imbalance of the clusters in the way one must for the $F_1$ score with more than $2$ clusters. Nevertheless, we shall adopt your suggestions and add the average weighted $F_1$ score later. Since the time for the rebuttal session is limited, we give the results for case 1 in Table 3 in the revision, where we can see consistent patterns between the $F_1$ scores and the error rates from Table 2. We will add the $F_1$ score as another criterion for the rest of the real data experiments in a future version.\n\n\\\n(__On adding a conclusion section__) We have now added a discussion section including the conclusion in the revision.",
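*Editorial illustration (not the authors' code):* the mis-classification error discussed above is the fraction of points assigned to the wrong cluster after the best matching between predicted and true labels; one standard way to compute it is Hungarian matching on the confusion matrix, as sketched below.

```python
# Sketch: clustering error rate under the best label permutation (Hungarian matching).
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_error(y_true, y_pred):
    K = int(max(y_true.max(), y_pred.max())) + 1
    conf = np.zeros((K, K), dtype=int)
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    rows, cols = linear_sum_assignment(-conf)       # maximize matched counts
    return 1.0 - conf[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([2, 2, 2, 0, 0, 1, 1, 1, 1])
print(clustering_error(y_true, y_pred))             # 1 of 9 points mismatched, about 0.111
```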
" In this paper, the authors provided the pitfalls of Wasserstein K-means via giving the examples and theoretical proof, and proposed a distance-based formulation of K-means in the Wasserstein space. In addition, the authors conduct a set of experiments on real and simulation data examples to evaluate the proposal. The main contributions are:\n1. The authors analyze the irregularity and non-robustness of barycenter-based formulation of K-means theoretically and empirically.\n2. To escape the pitfalls of the Wasserstein space, the authors generalize the distance-based formulation of K-means to the Wasserstein space.\n Strengths:\n1. The proposed problem is worth exploring\n2. The research is meaningful \nWeaknesses:\n1. In this paper, the authors claim that they provide evidence for pitfalls of barycenter-based Wasserstein K-means, but I cannot find the details of proof for the irregularity and non-robustness.\n2. Experiments are not enough. My main concern is the experiments in this paper. The authors conduct a set of experiments on real and simulation data examples to evaluate the proposal. However, the authors only conduct experiment on one real-world dataset and use error rate to evaluate the proposal. Using more datasets, and considering the imbalanced datasets, adding the F1 score can be more convincing. \n 1. Please further add the experiments of this paper.\n2. Please include the detailed proof of the pitfalls. \n3. The conclusion section may be missing. Please include this section in the end of this paper.\n In this paper, the authors only consider the two categories problem, and conduct the experiments on the real-world dataset. The multiple categories problem also should be considered. ",
" The paper uses k-means like algorithm to cluster probability measures using 2- Wasserstein metric. k-means clustering can be formulated in two ways 1) Centroid Based and 2) Distance based . Both formulations are equivalent in the Euclidean space however in the Wasserstein space the same is not true. The authors show that in the Wasserstein space centroid based k-means is much inferior to the distance based version due to irregularity and non-robustness. This is shown using well crafted examples. The authors empirically demonstrate the superiority of the distance based version. In addition the authors also give an SDP relaxation of the distance based k-means in Wasserstein space and show both theoretically and empirically that when the clusters are well separated , the SDP relaxation formulation can recover the clusters exactly with high probability. Strengths:\n1) The paper appears technically sound. I did not check the proofs but the main theorem and other results appear correct.\n2) Clustering is a very important problem in Machine learning and hence extending a popular algorithm like k-means to another non-Euclidean metric space, backed with a nice theoretical result will be of interest to the community.\n3) The use of examples with accompanied figures to highlight the important results is appreciable and aids understanding.\n\nWeaknesses:\n1) The paper is difficult to follow for a person not having background knowledge of Wasserstein space. For example terms like Alexandrov curvature, optimal transport map etc. are used without giving additional information or definition. I believe adding a bit more background information and details of notations used will go a long way in improving the readability of the paper for broader audience. \n2) Experiments are done on very small data and also only for k values 2 and 4. More experiments with different values of k and larger datasets (may be more clusters from MNIST ) will strengthen the paper.\n3) Related work for clustering of probability distributions is discussed very briefly. It would be good if the authors can cite works on clustering probability distributions in general (if available) and compare and contrast the Wasserstein k-means method. 1)Did you compare the empirical results with Euclidean k-means algorithm as baseline, used on the probability distribution vectors?\n2) Do you try some experiments with distributions other than Gaussian? Are there any results on exact recovery for other probability measures? see the above sections",
" Paper studies the behavior of two formulations of Wasserstein-K-means: i) a centroid based ii) a distance based formulation. It shows that the centroid-based formulation can have some unexpected behaviors and illustrates the phenomenon on several toy examples. A SDP formulation of the distance based formulation is then proposed in order to speed up the computations. Experiments on simulated data are given, together with a simple real-data application on a MNIST clustering scenario. Strengths of the paper :\n- the paper highlight an interesting behavior of the Wasserstein-k-means, advocating for the use of the distance-based formulation rather than the centroid based one (exhibiting some cases in which they lead to different solutions in the Wasserstein space). To my knowledge, this analysis is new. \n- Illustrations are provided, allowing better understanding the behavior in some specific cases.\n\nWeaknesses:\n- The paper advocates the use of the distance-based formulation but the scenarii that are considered seem rather limited. An in-depth analysis of the cases in which the difference appears and matters is lacking. For instance, it seems that it is related to « unbalanced » scenario, that is to say when the different cluster sizes are different. This behavior should be better investigated. \n- the complexity analysis of the algorithm is not provided. \n- The experimental section is rather limited, with one simple illustration on a MNIST clustering (digit 0 vs. digit 5) provided. \n See above As far as I can see, there is no potential negative societal impact of the work. ",
" The paper performs studies on three different implementations of K-means clustering for probability distributions. Conventionally, the problem was solved by finding barycenters. The authors point out that such approaches are unsuitable because of two pitfalls in Wasserstein barycenters. In contrast, the alternative implementations with pairwise distances and SDP relaxation can provide better clustering results.\n Strength:\nThe work points out the difference between Wasserstein K-means and conventional K-means. Good examples are provided. Through the paper, readers can realize the difficulty due to the Wasserstein barycenters, and future research efforts should be devoted to non-barycenter-based clustering algorithms.\n\nWeakness:\nMost illustrative examples are small and artificial. Although distance-based and SDP-based approaches work better for such examples, it is hard to say they are the future for the research problem because they are not scalable.\n The proposed distance-based and SDP-based approaches do not include barycenters or means as optimization variables. Can they still be called \"K-means\"?\n\n\nAlthough the proposed alternatives work better in the given examples, they are more expensive for large-scale data sets because they have a quadratic cost to the number of instances. Optimizing over the cluster membership also makes them more difficult to parallelize. All given examples are small and cannot reveal the drawback. The whole MNIST data set has 70,000 images, but the authors picked only a few hundred.\n\nHow did the authors obtain the pairwise distances for MNIST? In the checklist, the authors claimed that they described the limitations of their work but did not say where. There is no discussion or conclusion section in the paper, and I cannot find any relevant part for the limitations."
] | [
-1,
-1,
-1,
-1,
4,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"vWKvv5lf1hy",
"g6I-7EGHfg9",
"2O-AUBm9g8q",
"xqqr05hDDuh",
"nips_2022_OlGu-BXgJ-",
"nips_2022_OlGu-BXgJ-",
"nips_2022_OlGu-BXgJ-",
"nips_2022_OlGu-BXgJ-"
] |
nips_2022__4xg5moXVg | Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules | To unveil how the brain learns, ongoing work seeks biologically-plausible approximations of gradient descent algorithms for training recurrent neural networks (RNNs). Yet, beyond task accuracy, it is unclear if such learning rules converge to solutions that exhibit different levels of generalization than their non-biologically-plausible counterparts. Leveraging results from deep learning theory based on loss landscape curvature, we ask: how do biologically-plausible gradient approximations affect generalization? We first demonstrate that state-of-the-art biologically-plausible learning rules for training RNNs exhibit worse and more variable generalization performance compared to their machine learning counterparts that follow the true gradient more closely. Next, we verify that such generalization performance is correlated significantly with loss landscape curvature, and we show that biologically-plausible learning rules tend to approach high-curvature regions in synaptic weight space. Using tools from dynamical systems, we derive theoretical arguments and present a theorem explaining this phenomenon. This predicts our numerical results, and explains why biologically-plausible rules lead to worse and more variable generalization properties. Finally, we suggest potential remedies that could be used by the brain to mitigate this effect. To our knowledge, our analysis is the first to identify the reason for this generalization gap between artificial and biologically-plausible learning rules, which can help guide future investigations into how the brain learns solutions that generalize. | Accept | This paper applied ideas about generalization in the ML literature to biologically plausible architectures and learning rules. Especially, it explored links between curvature and generalization in biologically plausible learning.
There was active discussion about this paper, and three reviewers raised their scores during the rebuttal period. All reviewers felt this was a high quality paper, and that the results would be useful for later research. The closest thing to a criticism that came up during discussion was one reviewer describing the paper as a "high quality but incremental addition to the scientific literature."
Based upon the reviews, rebuttal, and reviewer discussion, I recommend paper acceptance. The authors should be sure to update their paper as discussed during the rebuttal period, and based upon the reviewer feedback.
PS -- Links between TBPTT and loss surface curvature would also be of interest in learned optimization and meta-learning more broadly, where meta-training is often performed via truncated unrolls of the inner problem. | train | [
"G51mBY7UFxo",
"qF9Ik0Ie5KE",
"Ll-g0lkzSBq",
"oTFL7nwrVaQ",
"ZjrJcehtem",
"5BGtUiXWCj7",
"5Y4DoFNE5uv",
"yzVlqhvVZBr",
"fwcneIw6H0m",
"KC3llYU3e9j",
"3_MC2D5coJM",
"9dlmhrRwzoD",
"ZdVQz3Wo-71",
"w7sSzyXANf1",
"trE5VAqyXlv",
"eYCLY_vttPW",
"K0MgBpEuYJt",
"f4bafnugqmH",
"bRwfxsjhdJb",
"1DkEHgFlqMF",
"0Qn0mtD-iXX",
"2yeyDZ2bdw3",
"1OK9HSJhi9",
"nRmWsix2wj"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to take this final moment to thank the reviewer once more for a stimulating exchange and encouraging feedback. We hope the reviewer will be satisfied with the changes made to the paper and appendix, as they suggested. We would be grateful if this would enable a score increase.\n\nbest regards,\nthe authors",
" We would like to extend our huge gratitude to the reviewer for not only appreciating the key points and contributions in our Appendix, but also providing specific and feasible suggestions that can make a big impact on the accessibility of this paper. As the reviewer correctly pointed out, the Appendix provides important supporting information that helps to introduce readers to unfamiliar areas, so it is crucial to improve the visibility and presentation of these sections. Thus, we have taken this reviewer’s suggestions very seriously and outlined our updates below. We are hopeful that after having implemented this reviewer’s suggestions, the accessibility and impact of our manuscript are going to be significantly improved. \n\n1. **Please check the spelling and grammar in the Appendix as there are a few mistakes**.\n\nWe are grateful for the suggestion. Indeed, we have indeed caught multiple grammar mistakes and typos in Appendix. Our updates include but are not limited to the following. A further thorough pass for typos will be done before the camera-ready version if this paper were accepted.\n\n- Appendix A.2: changed \"Thus, the factor $\\frac{\\partial h_{l,t}}{\\partial W_{h, ij}}$ and it poses\" to \"Thus, the factor $\\frac{\\partial h_{l,t}}{\\partial W_{h, ij}}$ poses\"\n- Appendix A.2: changed \"expensive nonlocal term\" to \"expensive nonlocal terms\" (should be plural)\n- Appendix A.3: changed \"we note that comparisons BPTT and approximate rules are done at\" to \"we note that comparisons between BPTT and approximate rules were done at\"\n- Throughout Appendix A: changed to past tense at multiple places when we were describing Methods\n- Appendix B.2: changed \"how close the magnitude of the leading Jacobian eigenvalues are to 1\" to \"how close the magnitude of the leading Jacobian eigenvalue is to 1\" (should be singular)\n- Appendix B.2: changed “In the extreme scenario where the loss is convex, there is just one minimum, so the question of minima preference becomes irrelevant” to \"If the loss were convex, there would just be one minimum and the question of minima preference would become irrelevant\" (fixed verb tense)\n- Appendix B.3: changed \"higher rank case\" to \"higher rank cases\" at multiple places (should be plural)\n- Appendix B.3 (also B.1): added articles, e.g. \"an approximate rule\" instead of just \"approximate rule\", at multiple places\n- Appendix B.3: changed \"for going further down\" to \"enables descend\" to be more concise \n- Appendix B.3: changed \"for a (locally) second order loss surface\" to \"on a (locally) second order loss surface)\"\n- Appendix C: Figure 6 caption, fixed an unclosed parenthesis \n- Appendix C: Figure 7 caption, changed “Similar results were obtained if we fit to the top 50 or 100 eigenvalues” to \"Fitting to the top 50 or 100 eigenvalues resulted in similar trends\" (fixed verb tense)\n- Appendix C: Figure 9 caption, changed \"we observe TBPTT tend to\" to \"we observe that TBPTT tends to\" \n- Appendix C: Figure 11 caption, changed \"use\" to \"used\" (fixed verb tense)\n\n2. **Flesh out Appendix C. Currently it is just a collection of Figures with captions. Please write this as you would a usual section in a main body of work. It does not have to be much, but a bit more of a formal section than it currently is**.\n\nWe absolutely agree with the reviewer that having surrounding texts in Appendix C would improve the formality. 
We have now added texts throughout Appendix C and made sure that we referred to every single figure in Appendix C in the same manner as we would in a main body of work. Again, we colored these texts in blue. ",
" 3. **You mention in the last paragraph of the Introduction (revised edition) that \"We also discuss potential remedies used by the brain (Appendix Figure 5)\". Unless something is getting lost in the edits I don't think Appendix Figure 5 covers this, at least not enough to be mentioned like it is in the main paper. That point is interesting and I would like to see those remedies, I assume learning rate modulation is the main point of this, but how learning rate modulation may be done in the brain then should be mentioned (ideas around plasticity may be new to some ML readers and I think your clarity on these concepts in the Appendix will serve them greatly).**\n\nWe agree with the reviewer that it is unclear from Appendix Figure 5 how such remedies can be implemented. **On top of Appendix Figure 5, we should have also referenced the last paragraph of Discussion section**, where we discussed experimental predictions and how learning rate modulation could be implemented in the brain, *“We conjecture that neuromodulatory mechanisms could be coupled with these learning rules to improve the convergence behavior through our scheduled learning rate experiments (Appendix Figure 5), where an initial high learning rate could prevent the learning trajectory from settling in sharp minima prematurely followed by gradual decay to avoid instabilities. One possible way to realize such learning rate modulation could be through serotonin neurons via uncertainty tracking, where the learning rate is high when the reward prediction error is high (this can happen at the beginning of learning) [169]. Since the authors of [169] showed that inhibiting serotonin led to failure in learning rate modulation, we conjecture that such inhibition might have an impact on the generalization performance of learning outcomes.”*\n\nTherefore, **we have replaced the sentence in Introduction that the reviewer quoted with** *“In the last paragraph of the Discussion section, we discuss potential remedies implemented by the brain and provide preliminary results (Appendix Figure 5)”* in order to direct the reader to the Discussion paragraph also.\n\nAdditionally, In response to this and the previous comment, we also summarized the key points above in the text that surrounds Figure 5 in Appendix. \n\n4. **Reference points in the Appendix more clearly in the main paper. In general I think this is done well, but I think in this case your clarity in helping a reader navigate to the information they find interesting but unfamiliar in the Appendix will help.**\n\nFor this comment, we have focused on making Appendix B (Geometry and ML theory) and Appendix A (bio-plausible temporal credit assignment rules) more visible in the main text, so as to facilitate the introduction of readers to these different areas. 
\n\nFor Appendix B, in response to the reviewer’s remark *“For example, the equality of the Fisher Information Matrix (outer product of the Jacobian) and Hessian at a minimum in the loss landscape (a key concept in Information Geometry) is present in Appendix B”*, we have referred to that in Discussion, specifically as the ending sentence of the discussion paragraph on step size stability that the reviewer requested in an earlier comment, *“This increased stability is closely tied to Theorem 1 --- which predicts a greater dynamical stability for (the weight update difference equation of) three-factor rules --- due to the correspondence between loss' Hessian matrix and the Jacobian matrix of the weight update difference equation (the correspondence is explained in Theorem 1 proof in Appendix B).”* On top of that, we have alerted the readers in Introduction to check out Appendix B, *“we encourage the reader to visit Appendix B for the Theorem Proof and discussion on loss landscape geometry”*. \n\nFor Appendix A, we have alerted the readers in Introduction to visit Appendix A.2 to learn more about bio-plausible temporal credit assignment rules, *“for in-depth explanation of these bio-plausible temporal credit assignment rules, please visit Appendix A.2 for how these rules are implemented and why they are bio-plausible”*. \n\n",
" We are extremely grateful to this reviewer for careful reading of our response and taking that into consideration to revise their score. Moreover, we are very glad that the reviewer shares our excitement for future investigations into various regularizations with bio-plausible learning rules and how they could balance between potential benefits of large learning rates for generalization and numerical stability issues in the brain. We would like to extend our huge gratitude to the reviewer again for all their specific suggestions on simulations and discussion points that led to the improvement of this paper.",
" I appreciate the authors' careful response and additional experiments. I think the additional experiments help sharpen the claims made in the paper and increase its novelty. The explanations regarding numerical instability are particularly valuable. As the authors mention, additional regularizations can mitigate this numerical instability. Further numerical investigations into using various regularizations with biologically-plausible learning rules would be an interesting future direction, but may understandably be outside of the scope of this submission.\n\nOverall, I believe this paper can be a strong contribution to the conference. In light of the additional experiments and explanations, I have increased my rating for this submission.",
" As requested I have given some more thought to improvements that can be made in the current scope of the work. There is not much in the current scope to be improved (I include reducing the amount of dependence on prior work a change of scope). You can see my original review for my feelings on the originality/quality/significance as they have not changed.\n\nWhat has changed upon reflection is that I may have under-appreciated the quality of the Appendix. While not usually something which should impact a review too much, in the context of this work I think it is relevant to consider. Specifically, the reliance on prior work, while hindering the main paper slightly, is a real benefit in the Appendix as you have unified many fields (ML,Neuro,Geometry) well. I could see the Appendix introducing many readers to one of the fields which is not the one that drew them to the paper in the first place. For example, the equality of the Fisher Information Matrix (outer product of the Jacobian) and Hessian at a minimum in the loss landscape (a key concept in Information Geometry) is present in Appendix B, while Appendix A does a good job of introducing the reader to truncated learning rules and why they are biologically plausible. That said, the Appendix as it stands could be improved.\n\nHere are my suggestion:\n1. Please check the spelling and grammar in the Appendix as there are a few mistakes.\n2. Flesh out Appendix C. Currently it is just a collection of Figures with captions. Please write this as you would a usual section in a main body of work. It does not have to be much, but a bit more of a formal section than it currently is.\n3. You mention in the last paragraph of the Introduction (revised edition) that \"We also discuss potential remedies used by the brain (Appendix Figure 5)\". Unless something is getting lost in the edits I don't think Appendix Figure 5 covers this, at least not enough to be mentioned like it is in the main paper. That point is interesting and I would like to see those remedies, I assume learning rate modulation is the main point of this, but how learning rate modulation may be done in the brain then should be mentioned (ideas around plasticity may be new to some ML readers and I think your clarity on these concepts in the Appendix will serve them greatly).\n4. Reference points in the Appendix more clearly in the main paper. In general I think this is done well, but I think in this case your clarity in helping a reader navigate to the information they find interesting but unfamiliar in the Appendix will help.\n\nI trust the authors will make the necessary changes and I do not intend any of these to be too drastic. But rather the authors can make small changes in line with these suggestions and I do believe it will go a long way. Please let me know when a new version of the paper is available and I will increase my score.",
" Given that the author-reviewer discussion period is coming to a close soon, we kindly request the reviewer to let us know if our responses have resolved their concerns, and if there are any other questions that we can address. We are hopeful the reviewer will recognize that our recent work addresses initial concerns. We we are keen to further improve our paper in light of a constructive author-review discussion.\n",
" Given that the author-reviewer discussion period is coming to a close soon, we kindly request the reviewer to let us know if our responses have resolved their concerns, and if there are any other questions that we can address. We are hopeful the reviewer will recognize that our recent work addresses initial concerns. We we are keen to further improve our paper in light of a constructive author-review discussion.\n",
" Given that the author-reviewer discussion period is coming to a close soon, we kindly request the reviewer to let us know if our responses have resolved their concerns, and if there are any other questions that we can address. We are hopeful the reviewer will recognize that our recent work addresses initial concerns. We we are keen to further improve our paper in light of a constructive author-review discussion.\n",
" We thank the Reviewer for their response and follow-ups. We agree that there is further exciting theoretical work ahead to address stability, and that this falls outside the scope of the current paper, which offers considerable contributions as it stands.\n\nAs the reviewer points out, we too believe the paper would be a great contribution to the scientific community as it presents innovative work at the intersection of ML theory and computational neuroscience. Furthermore, we think that NeurIPS22 is an ideal venue and time to do so. We feel the reviewer accurately recognizes the value and originality of such cross-disciplinary work, which can often suffer from shortcomings in review processes as it spans distant areas of expertise. As such, we would be extremely grateful if the reviewer would identify further areas we could improve within the paper's present scope to facilitate a score increase. We understand from comments the reviewer is supportive of this work being published and would be grateful for the opportunity to earn their continued support in the final stages of reviews.\n\nvery best regards,",
" Thank you to the authors for their response. I have checked the added content of the paper and it appears correct (I appreciate the authors making the changes clear and this process easy). I think the authors have understood the point of my suggestion to discuss learning rate in terms of stability; and the added paragraphs in the Discussion are helpful and sufficient for this work. I would be very interested to see a more theoretical approach to understanding the relationship, however to me that is clearly beyond the scope of this work. The point on there being two types of instability is quite nuanced and I appreciate the authors being clear on that in the rebuttal.\n\nMy current score (of 6) still reflects my sentiments on the work and so I will leave it as such for the moment (I will follow the discussion with my fellow reviews and raise any points if necessary).",
" We would like to thank all reviewers for their valuable suggestions and constructive feedback. We worked very hard to **address all concerns** raised by the reviewers, and revised the manuscript in light of their suggestions. As a result, we believe our manuscript is worthy of publication and would be of wide interest to the NeurIPS community. Furthermore, as recognized by some reviewers, this work is one of the first to leverage key theoretical tools from ML optimization theory to better understand biologically plausible learning rules. As NeurIPS is rooted in both theoretical neuroscience and AI, we cannot think of a better venue for this type of cross-disciplinary work, which we hope will seed more follow up.\n\nThis review process has been extremely valuable, and led to notable improvements. This is not always the case, and we feel fortunate for attentionate, engaged, and reasonable reviews. In particular, we implemented reviewer q4yu’s comments to significantly improve the clarity of this paper and we would like to extend huge thanks to their valuable comments. We also have added additional simulations in response to reviewer eejL’s specific suggestions on how to sharpen some of our claims even more. We have added explanations to further highlight the key contributions of this work thanks to reviewer 22dn’s comments. Thanks to the great discussion points made by reviewer dP4V’s, we have elaborated our discussions on future work and relationship of work to other important papers in the area. The changes to the manuscript are colored in <span style=\"color:blue\">blue</span> in the present version for easy referencing. Responses to reviewer feedback are provided for each review independently. Overall, we believe that these changes have significantly improved both the content and presentation of our submission. \n\nWe also very much appreciate the positive comments regarding the soundness of the paper from reviewers eejL, dP4V and 22dn, which include but are not limited to:\n\n- *“There are no points during the paper which stand out as being unreasonable, and from a scientific method point of view the work appears sound, with each step being justified either from a comp-neuro or ML perspective. Balancing the two topic was done well and I think that is a potentially understated positive for this work\"* from reviewer dP4V\n\n- *”The theory is sound and well justified, and the experiments are generally comprehensive”* from reviewer eejL. \n\nAdditionally, we would like to thank reviewers dP4V and eejL for recognizing several key contributions of this paper that could motivate exciting future works, such as: \n\n- *“Significant to the comp-neuro community as a step towards introducing some of the more theoretical ML concepts such as loss landscape curvature”* from reviewer dP4V\n\n- *“Generalization in biologically-motivated learning rules has been studied previously. This paper provides surprising new insights into the nature of solutions found by biologically-plausible learning rules. Notably, to my knowledge, this paper is the first to quantitatively explain with high precision the relatively poor generalization performance of biologically-plausible learning rules (see Figure 4)”* from reviewer eejL",
" We would like to extend our gratitude to the reviewer for an excellent summary of our work, careful reading of our manuscript, and their specific suggestions on simulations to strengthen the paper. We would also like to thank the reviewer for appreciating several key contributions of the paper and shared vision on future directions. We are hopeful that the improvements and answers provided below will address the reviewer's concerns. \n\n1) **“Some minor issues: The W+ and W- notation in equations 7 and 8 is not explained. Typo- menchmark on line 140. Typo- assymptotic on line 1054.”**\n\nWe really appreciate the reviewer’s careful reading of our manuscript and finding these typos. We have fixed the typos and explained the W+ and W- notation. \n\n2) **“Do generalization gaps correlate with Hessian eigenvalues in networks trained with the same rule?”**\n\nWe thank the reviewer for the excellent suggestion to strengthen the argument. We have now created a scatter plot for leading Hessian eigenvalue and generalization gap with the same rule. **We repeated this for each of BPTT and three-factor rule**, and we added to plots to Appendix Figure 12 (and referred to in Results). We decided to perform the test on three-factor because the generalization curvature correlation has not been demonstrated previously for bio-plausible temporal credit assignment rules; we decided to perform the test on BPTT simply as a test of agreement with existing literature on generalization curvature correlation demonstrated for (S)GD. As expected, we found that generalization gap significantly correlate with leading Hessian eigenvalue in these experiments. Ideally we'd like to repeat this for all tasks but simulated one task due to time constraint. \n \n3) **“This paper considers the generalization gap of different algorithms by roughly fixing their training performance and comparing their test performance. How would the results change if the test performance were fixed and the training performance were varied (by stopping the rules at different points during training, for example)?”**\n\nAgain, we thank the reviewer for the great suggestion. We stopped BPTT when it reached the same test accuracy as three-factor and found that the leading Hessian eigenvalue and generalization gap is still significantly higher for three-factor. This should come as no surprise as we don’t see any point along the BPTT Hessian eigenvalue trajectory to match that of terminal Hessian eigenvalue attained by three-factor rule. We added these new results to Appendix Table 1 and referred to it in the Results section.\n\n4) **“How would a three-factor rule perform empirically when scaled to match the along-gradient update sizes of gradient descent?”**\n\nThis is a very interesting question. We should clarify that when we match the along-gradient update size, this may require us to increase the learning rate for three-factor rule by a factor of 20 times depending on the value of ρ (ρ vary depending on the task and model, but we’ve observed values ranging from 0.02 to 0.3). This significant increase in learning rate can quickly lead to numerical overflow, resulting in values of NaN in the network. This observation is typical when a very large learning rate is used. Once that happens, we cannot proceed with the training. We have added this explanation to the Results section in the main text. 
\n\nIn response to the reviewer’s comment, we would still like to strengthen the matching step length experiment by doing additional runs without numerical overflow, so we decided on a middle ground between the reviewer’s suggestion and what was done in Figure 4: we repeated the experiment at three times the learning rate used in Figure 4. We again found that matching along-gradient update sizes leads to similar curvature convergence. This new plot can be found in Appendix Figure 12C and we referred to it in the Results section. \n\n**For response part (2/2), please proceed to our comment below.** ",
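For readers who want to reproduce the kind of curvature measurement referred to above (the leading Hessian eigenvalue of the loss), here is a minimal PyTorch sketch using power iteration on Hessian-vector products. The tiny feedforward model, random data, and iteration count are illustrative assumptions rather than the authors' experimental setup; the routine estimates the largest-magnitude eigenvalue, which coincides with the leading eigenvalue near a minimum.

```python
import torch

def leading_hessian_eigenvalue(loss, params, n_iters=50):
    """Estimate the largest-magnitude Hessian eigenvalue of `loss` with respect
    to `params` via power iteration on Hessian-vector products."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = torch.tensor(0.0)
    for _ in range(n_iters):
        # Normalize the current direction across all parameter tensors.
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        gv = sum((g * x).sum() for g, x in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        # Rayleigh quotient v^T H v gives the current eigenvalue estimate.
        eig = sum((h * x).sum() for h, x in zip(hv, v))
        v = [h.detach() for h in hv]
    return eig.item()

# Toy usage on a small classifier with random data.
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 2))
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = torch.nn.functional.cross_entropy(model(x), y)
print(leading_hessian_eigenvalue(loss, list(model.parameters())))
```

Repeating such a measurement at the end of training for each learning rule and random seed is enough to produce the kind of eigenvalue-versus-generalization-gap scatter plot described above.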
" 5) **“At what point does numerical instability occur in biologically-plausible learning rules?”**\n\nThis is an excellent question. Here are our main points in response to the reviewer’s question on which point this numerical instability occurs, and how to interpret this in biological contexts:\n\n- How can what we call numerical noise manifest in the brain? In digital computers, numerical instabilities are an issues because of rounding errors. While the analog nature of biology may prevent this, the same problems that lead to numerical instabilities, such as big ranges between quantities added or multiplied, remains an issue for biology since quantities must be stored in noisy activity patterns of neurotransmitter release. In other words, such noise could limit the \"numerical precision\" in the brain. \n\n- At which point such instability occurs could depend on homeostatic mechanisms in the brain, which could regulate quantities back to the \"optimal operating range\". It would certainly be an interesting future study to see if such homeostatic mechanisms can help alleviate the numerical stability issue in the brain, hence enabling larger \"learning rates” to be used for bio-plausible temporal credit assignment rules so as to find flatter minima. \n\nAs for which point this numerical instability occurs digitally: \n\n- It might be worthwhile to start by clarifying that there could be at least two kinds of instability: (A) the instability in the sense that the second order term in Taylor expansion of loss becomes significant relative to the first order term due to large step size; (B) numerical instability in the sense that numerical overflow can occur from large weight values in simulations. \n\n- For the explanation in our initial submission, we mainly considered (B) numerical instability. Large weight updates could result in accumulation of weight values. Of course, there are many regularization techniques to alleviate this issue (e.g. weight decay), but relative to BPTT, three-factor is more prone to this issue due to the additional orthogonal error component added to the weights, as explained in the manuscript.\n\n- If we are indeed considering instability in terms of (B), then determining at which point large weights could lead to numerical overflow would require careful calculation of how neuron activity propagates over time steps in relation to weight values. This calculation would be model and task dependent, and even precision dependent (e.g. 32-bit vs 64-bit). More importantly, this calculation would depend on the consistency of alignment of weight direction and update direction. If the two directions are consistently aligned, then the weight values just build up quicker. \n\n- If we are considering instability in terms of (A), then indeed as the reviewer said, the eigenvalues of the Hessian would matter, as it determines the magnitude of the second order loss term would indeed depend on the eigenvalues of the Hessian. In addition, how well aligned the update is with top Hessian eigenvectors also matters. As an extreme example, if the update lies in the null space of Hessian (i.e. aligned with eigenvectors with associated eigenvalues as 0), then the second order term would be 0. That ties nicely to a discussion point in the paper on how noise direction can affect generalization (please see also reviewer dP4V’s comments). \n\n- This might not be related to the reviewer’s question, but one thing to note is that the two kinds of instability can interact. 
For instance, achieving instability in (A) would require a large enough learning rate, so that the second order term of the Taylor expansion of the loss can exceed the first order term. However, the instability in (B) can make it impossible to use large enough learning rates to achieve that desired instability in (A). \n\nTo reflect these points above, we have now added a brief explanation in Results on the numerical instability that occurs digitally, right after we first mentioned instability. After that explanation, we directed the reader to the Discussion section, where we added a few sentences on numerical instabilities in the brain in the last paragraph. \n",
" We would like to extend our huge gratitude to the reviewer for an excellent summary of our work, careful reading of our manuscript, and most importantly, for their astuteness in highlighting one of the most important points of the paper: noise from truncation does not affect generalization in the same manner as stochastic gradient noise. We also thank the reviewer for the shared vision and the comments on interesting future directions that this work can inspire. We are hopeful that the improvements and answers provided below will address the reviewer's concerns.\n\n1) **Add to the discussion at the end about truncation noise affecting the stable step size; the one citation below [1] may be of use for this purpose**\n\nWe are grateful that the reviewer is creating this very interesting discussion thread. We have also reflected points below in Discussion in response to the reviewer's great comment. We also would very much appreciate it if the reviewer could please be so kind as to let us know if we misunderstood anything from the reviewer's comment. \n\nIf we understood correctly, the reviewer would like us to discuss how truncation noise affects the threshold at which the learning rate (or step size) would be flipped from being stable to unstable. If our understanding is correct, a step size is defined to be stable (on page 5 in [1]) if it is small enough such that the sharpness never rises to 2/$\\eta$. For a full batch gradient descent case (as in [1]), sharpness crossing 2/$\\eta$ would correspond to when the 2nd order Taylor expansion term of loss catches up to the 1st and learning could \"catapult\" into a flatter region to accommodate the step size (using the language of [1-2]). Our results suggest that truncation noise can make it harder for the “catapult'' behavior to happen. If the noise is not aligned with the few leading Hessian eigenvectors with large outlier eigenvalues but aligned with the eigendirections with negligible eigenvalues, then it can only have limited contribution to the 2nd Taylor term (see also Appendix Eq. 33-36); this ties nicely back to the reviewer’s and our remark on how noise direction matters. Moreover, the noise term demands a smaller learning rate to be used to avoid numerical issues (as explained in the paper). Because the noise is not adding much to the 2nd Taylor term and demands a smaller learning rate, it would weaken the 2nd Taylor term and hence make learning harder to “catapult” into a flatter region, thereby increasing the threshold for step size stability. This is all consistent with the results on convergence to high-curvature loss landscape regions for bio-plausible temporal credit assignment rules seen in this study. \n\nWe would also like to thank the reviewer for bringing up [1], as the series of “catapult” behavior mentioned in [1] and [2] — happen in this so-called Edge of Stability (EoS) regime — seems closely related to our results. Along with [1], we have cited a few other works [2-5] in the updated manuscript that touched upon EoS explicitly or implicitly. \n\nAlso, we are not sure if this is relevant to the reviewer's comment, but to potentially clarify our above paragraphs better, we would like to mention instability could mean at least two things: (A) the instability in the sense that the second order term in Taylor expansion of loss becomes significant relative to the first order term due to large step size; (B) numerical instability in the sense that numerical overflow can occur from large weight values in simulations. 
For our explanation (in the initial submission) on how orthogonal noise can contribute to instability, we mainly considered (B) numerical instability: large weight updates could result in accumulation of weight values. Of course, there are many regularization techniques to alleviate this issue (e.g. weight decay), but relative to BPTT, the three-factor rule is more prone to this issue due to the additional orthogonal error component added to the weights, as explained in the manuscript. One thing to note is that the two kinds of instability can interact. For instance, achieving the instability in (A) would require a large enough learning rate, so that the second order term of the Taylor expansion of the loss can exceed the first order term. However, the instability in (B) can make it impossible to use large enough learning rates to achieve that desired instability in (A). \n\n**For response part (2/2), please proceed to our comment below.** ",
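To make the step-size stability argument above concrete, a sketch of the second-order Taylor expansion it refers to can be written for an update that splits into a gradient-aligned part and an orthogonal truncation-noise part. The notation below ($g$ for the gradient, $n$ for the noise, $H$ for the Hessian, $\eta$ for the learning rate) is introduced only for illustration and is not taken verbatim from the paper or its appendix.

$$
\Delta L \;\approx\; -\eta\, g^{\top}(g + n) \;+\; \frac{\eta^{2}}{2}\,(g + n)^{\top} H\,(g + n),
\qquad
\eta\,\lambda_{\max}(H) \;\lesssim\; 2 .
$$

The left expression is the usual quadratic expansion of the loss change under the update $\Delta w = -\eta (g + n)$; the inequality is the standard requirement for the quadratic term not to dominate along the top Hessian eigenvector, matching the $2/\eta$ sharpness threshold quoted above. If $n$ is concentrated in directions where $H$ has negligible eigenvalues, its contribution to the quadratic term is small, so the noise mainly forces a smaller $\eta$ (to avoid the overflow-type instability (B)) rather than helping to trigger the catapult-type instability (A).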
" 2) **“For the rebuttal period I would find it helpful for the authors to further address the concept of different kinds of noise impacting generalization differently.”**\n\nAgain, we would like to thank the reviewer for the stimulating discussion! Here are our comments in addition to our response to the previous comment above. To begin discussing the impact of different kinds of noise, we can start by deciding on the aspects of noise we would like to focus on. One aspect could be **direction**, which has been discussed in the manuscript and briefly mentioned in the previous comment. \n\nHowever, another aspect we did not discuss in the manuscript but the reviewer brought up could be **consistency**, which could also play an important role in generalization. Intuitively, if noise is constantly biased toward one direction, then the weight could build up quickly along that direction, and the increased weight norm can significantly limit the learning rate to avoid numerical instability; and we have discussed how small learning rate can hurt generalization in the manuscript. However, if the noise direction fluctuates and cancels out each other across update steps, then weight may not build up as much so the numerical instability issue is less of a concern. \n\nEchoing the reviewer’s comments, we look forward to seeing future studies on more characterization of different sources of noise that appear in biological systems (e.g. learning rule ese directions are). These noises can have different structures than the kinds of noise in optimization. We believe the ML community provides valuable tools to examine how different noises from neural systems can impact generalization, and if there are certain biological noise that make them more favorable for generalization. We have now added discussion points to the Discussion section reflecting this point. \n\n**References:**\n\n[1] Cohen, Jeremy M., et al. \"Gradient descent on neural networks typically occurs at the edge of stability.\" arXiv preprint arXiv:2103.00065 (2021).\n\n[2] Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, and Guy Gur-Ari. The large learning rate phase of deep learning: the catapult mechanism. arXiv preprint arXiv:2003.02218 (2020).\n\n[3] Stanisław Jastrzębski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho*, and Krzysztof Geras*. The break-even point on optimization trajectories of deep neural networks. In International Conference on Learning Representations (2020). \n\n[4] Justin Gilmer, Behrooz Ghorbani, Ankush Garg, Sneha Kudugunta, Behnam Neyshabur, David Cardoze, George Edward Dahl, Zachary Nado, and Orhan Firat. A loss curvature perspective on training instabilities of deep learning models. In International Conference on Learning Representations (2022).\n\n[5] Sanjeev Arora, Zhiyuan Li, and Abhishek Panigrahi. Understanding gradient descent on edge of stability in deep learning. arXiv preprint arXiv:2205.09745 (2022).truncation, data noise) and their bias/variance properties (noise direction and how consistent th\n",
" We would like to extend our gratitude to the reviewer for their valuable suggestions on how to improve the clarity of the paper. **We are hopeful that the presentation of our manuscript has been significantly improved after incorporating reviewer qy4u’s comments.**\n\nWe would also like to make a general remark that due to the nine-page limit, we unfortunately had to include a lot of content (e.g. hyperparameter tuning, Theorem proofs, additional simulations) in the Appendix. In response to the reviewer’s comments, we incorporated direct references to Appendix content in the main text, in order to make such content more visible. Below, we directly address the reviewer's points, and describe related changes made to the manuscript.\n\n1) **“The authors claim their results generalize beyond MSE loss but do not demonstrate this empirically.”**\n\nWe used cross-entropy loss for the sequential MNIST task and binary cross-entropy loss for the delayed match-to-sample task. However, we understand where the confusion comes from. In the main text, we provided only the equation for MSE loss, although we provided provided the equation for cross-entropy loss in Appendix and alerted to reader to find that in the main text (please see *“We consider cross-entropy loss for classification tasks (Methods in Appendix A)”* in Section 3.1). In response to the reviewer’s point, we have added an equation for classification loss the the main text right beneath the MSE loss equation, in order to avoid this confusion. \n\n2) **“The authors make claims of potential remedies, such as using larger learning rates early in training but do not justify this claim theoretically or demonstrate it empirically… Were potential remedies tested?”**\n\nAgain, we apologize if this was not clear. We have tested potential remedies, which can be found in Appendix Figure 5. We wished to include the figure in the main text, but unfortunately we had to move it due to the page limit. To make Appendix Figure 5 more prominent, we have also referred to it in Introduction. In our initial submission, this figure and the results of empirical testing was referenced in Results and Discussion. \n\n3) **“A description of training parameters and procedure used for the biologically-plausible learning rules is not presented in the main test.”**\n\nIndeed, detailed training procedures (including hyperparameter tuning) are not in the main text. We moved training details to Appendix A due to the nine-page limit. We have alerted the reader in Results to look for training details in Appendix A. \n\n4) **“It is uncertain and not addressed if the generalization gap can be reduced by fine-tuning existing truncated BPTT algorithms.”**\n\nAll of our runs have been hyperparameter tuned and training details can be found in Appendix A. We apologize if this detail is not clear, and this is now clearly stated in the revised main text.\n\nIn addition, we explained the cause for our empirical observations using Theorem 1 and Figure 4, which suggest that the observed differences between rules cannot be reduced by fine-tuning learning rates. Please see our explanation on Figure 4 and Theorem 1 in Results. We have now added the following sentence to that section to clarify this point: *“Because of the numerical issues associated with increasing the learning rate for such rules, the differences in generalization and curvature convergence between learning rules cannot be reduced by fine-tuning the learning rate”*. 
\n\n**For response part (2/2), please proceed to our comment below.** ",
" 5) **“The authors cite almost 100 articles in section 2.1. Though these articles may be relevant to biologically-plausible learning in general it is not certain they are directly relevant to networks trained with truncated BPTT which is the focus of this study.”**\n\nWe thank the reviewer for this comment. \n\nThere is a lot of relevant literature on generalization gap, loss landscape, bio-plausible rule and RNN model for the brain. **To our best knowledge, our work is one of the first that introduces theoretical ML concepts such as loss landscape curvature (and how that affects generalization) to the computational neuroscience community** (reviewer dP4V also acknowledges this). As such, we tried to be as comprehensive in our citation as we can to cover the different research areas that inspired this work. We apologize that it turned out to be crowded, but we felt that it’s a good reference service to the community. \n\nWe agree with the reviewer that existing bio-plausible learning rules for RNN, and how they are all truncation-based (to our knowledge), should be made more prominent in Related Works. To reflect that, we have added the following to Section 2.1, *“These existing bio-plausible rules for training RNNs [11-13] are truncation-based (which is the focus of this study), so that the untruncated terms of the gradient can be assigned with putative identities to known biological learning ingredients:...”*\n\n6) **“The authors generalize their claims to all biologically-inspired learning rules but limit their study to three specific learning rules and three tasks. Specifically, the authors focus on truncated BPTT in RNNs on three tasks with MSE loss. These results may hold only for truncated backpropagation through time but not for all biologically-plausible learning rules. The style implies these results hold for all biologically-inspired learning rules and all loss functions. If this is true, it should be presented in the main text.”**\n\nWe thank the reviewer for the great suggestion. **Indeed, we focus on temporal credit assignment rules, and we specified that in the title in our initial submission**. To our knowledge most if not all existing bio-plausible temporal credit assignment rules are truncation-based, so that the approximate gradient would involve only terms that can be assigned with putative identities to known biological learning ingredients. In response to the reviewer’s suggestion, we have now further clarified this point throughout the manuscript. \n\nIn addition, we have added a discussion sentence for future investigations to our analysis to other bio-plausible learning systems beyond RNNs, *“This indicates that noise with different properties (e.g. direction) could affect generalization differently, thereby motivating future investigations into applying our curvature-based analysis to broader range of bio-inspired learning systems and examine biological noise with different properties”*. On that note, since our work is one of the first that introduces theoretical ML concepts such as loss landscape curvature (and how that affects generalization) to the computational neuroscience community (to our knowledge and also mentioned by reviewer dP4V), we hope our work will inspire more future studies that leverage the remarkable progress from the ML community to study learning and generalization for a broad range of neural mechanisms and systems. \n\nRegarding loss function, we used cross-entropy loss for classification tasks. Please see one of our earlier responses. 
It’s true that we only ran three tasks empirically, but we expect our conclusions to generalize beyond these three tasks due to our theory. \n\n7) **“In figure 2 (a, b, c), does random initialization refer to weight initialization? Are all parameters of the learning algorithm held constant or are the distributions computed across all parameter configurations, e.g., learning rate ..etc? It appears that truncated BPTT can achieve the same generalization gap as BPTT. In the case of sMNIST it appears that it achieves zero generalization gap with a greater probability than full BPTT.”**\n\nWe thank the reviewer for the astute observation. Yes, indeed it is possible for these truncation-based bio-plausible rules to achieve the same generalization gap as BPTT for some runs. However, the focus here is not to look at a few runs, *but the overall trend across many runs*. We hope the scatter plot (2d-f) can clarify the overall trend (worse and more variable generalization performance) and we are also going to make a note of this point in the caption. \n\nFor the same learning rule, the same set of hyperparameters are used across runs. This is because hyperparameters should be tuned during the validation step before model deployment (please refer to Appendix A for parameter tuning details). \n\nYes, “random initialization” refers to weight initialization. Thanks to the reviewer’s note, we have now clarified that in the caption. ",
" We appreciate the reviewer’s supportive comments on the soundness and presentation of the work. As described below, we have built on the reviewer's comments to improve the paper, and we hope we can clarify some of the more contentious points raised. Chiefly, we would like to start by addressing the following:\n\n1) **“It is well known that bio-plausible methods generally perform worse than SOTA back propagation. This work provides further empirical evidence that this is the case, but I do not find these results very surprising.”**\n\nWe agree with the reviewer that it is well known that bio-plausible methods generally perform worse than their ML counterpart. Importantly, we believe the reasons behind this performance gap are not well understood and constitutes an important gap in knowledge. As such, **the main goal of our paper is to make steps toward elucidating some of the reasons behind this phenomenon**. While we contribute a novel systematic evaluation of generalization gaps for a family of bio-plausible rules, we agree the results of these numerical experiments may not be surprising. This is expected, as they reproduce generally understood cases, and act as the starting point of our main contributions: a mechanistic explanation of the reasons for poorer generalization for existing bio-plausible temporal credit assignment rules. To this end, we leverage theoretical tools from optimization theory and demonstrate, via theorems that match experiments, that bio-plausible rules for temporal credit assignment tend to favor higher curvature minima in the loss landscape. In doing so, we also uncover the importance of distinct sources of variability in bio-plausible rules.\n\nWe feel that our contribution sheds light on general mechanisms shared by several proposed bio-plausible learning rules, and that it is a valuable contribution to the existing literature. Importantly, we highlight the novelty in bridging the fields of computational neuroscience, AI, and ML optimization. We take the liberty to quote reviewer eejL who we feel accurately captures the scope of the present study: *“Notably, to my knowledge, this paper is the first to quantitatively explain with high precision the relatively poor generalization performance of biologically-plausible learning rules (see Figure 4).”*\n\nFurthermore our study suggests specific future studies and experiments on learning, and suggests to look beyond learning rules in isolation and consider biological ingredients that can interact with learning rules for better performance (as mentioned in Discussion). We discussed learning rate modulation as one such potential ingredient that may lead to better performance (Appendix Figure 5). In addition, we also believe our paper has additional benefits for the scientific community. \n\nIndeed, our analysis finds that noise from truncation does not affect generalization in the same manner as stochastic gradient noise. This motivates future deep learning (and potentially computational neuroscience) research on how different kinds of noise can impact learning and generalization differently, as mentioned by reviewer dP4V. \n\nMore importantly, our work is one of the first that introduces theoretical ML concepts such as loss landscape curvature (and how that affects generalization) to the computational neuroscience community, as mentioned by reviewer dP4V. 
In doing so, we hope to inspire more future studies that leverage the remarkable progress from the ML community to study the impact of various neural mechanisms on learning, particularly the role of different kinds of noise that occurs in neural systems that may differ from the ones encountered in optimization. \n\nWe have now added the above points to the Introduction and Discussion sections to further highlight the contributions of our paper.\n\n**For response part (2/2), please proceed to our comment below.** ",
" 2) **“Are there other advantages to bio-plausible methods or advantages in terms of generalization can only be seen on more complex tasks?”**\n\nWe thank the reviewer for the excellent question. We know the brain excels at generalization compared to most SoTA artificial systems on several tasks, so it’s logical to ask if there are advantages to bio-plausible methods not seen in this study, perhaps on more complex tasks. \n\nBefore answering the question directly, we would like to remind the reviewer that there are a number of biological ingredients not explored in our study, notably architecture. As framed in introduction and explained in discussion, this paper examines learning rules in isolation as a starting study for bringing theoretical ML concepts such as loss landscape curvature to study biological learning systems. We hope our work can inspire many exciting future studies that apply our analysis to examine the impact of a diverse array of neural circuit elements on generalization and understand why the brain generalizes so well. \n\nComing back to the reviewer’s question, while we may not be able to thoroughly examine this point across many tasks empirically or theoretically given the short rebuttal period (we intend to leave this as future work), our speculation is that bio-plausible rules, when combined with certain architecture, could enable better generalization for specific tasks. This speculation is backed by a recent Neuron paper that investigates how anti-Hebbian rule achieves better OOD generalization than its ML counterpart for familiarity detection task after the network architecture has been meta-learned [1]. The authors nicely explained how the anti-Hebbian learning dynamic is natural for this specific task. However, the study only examined the familiarity detection task. On the other hand, our analysis, by using loss landscape curvature as a tool for studying generalization, is task agnostic. Our analysis, of course, would be limited by to what extent loss landscape curvature can explain generalization, as explained in Discussion. \n\n[1] Danil Tyulmankov, Guangyu Robert Yang, and LF Abbott. Meta-learning synaptic plasticity and memory addressing for continual familiarity detection. Neuron, 110(3):544–557, 2022.",
" - This paper investigates the ability of existing bio-plausible temporal credit assignment rules to generalize. Finding that training with bio-plausible learning rules results in models that have larger and more variable train/test generalization gaps. \n- It then compares the loses’ hessian eigenspectrum of bio-plausible and SoTA learning rules finding that bio-plausible rules tend to approach high curvature regions in synaptic weight space\n- It suggest an explanation for bio-plausible rules preference for high curvature regions based on worse alignment to the true gradient Strengths\n- This paper is well written and clear\n- It analyzed several learning rules and the results support the conclusion that bio-plausible methods don't generalize well\n- The theoretical results offer interesting insights about the relationship between curvature and gradient alignment\n\nWeakness\n- It is well known that bio-plausible methods generally perform worse than SOTA back propagation. This work provides further empirical evidence that this is the case, but I do not find these results very surprising.\n Are there other advantages to bio-plausible methods or advantages in terms of generalization can only be seen on more complex tasks? yes",
" The authors investigate the difference in the generalization gap in RNNs trained to minimize MSE loss with backpropagation through time and truncated BPTT. The authors claim the difference is caused by the landscape curvature of the loss, specifically, truncate BPTT approaches high-curvature regions in the synaptic weight space. The authors propose a theoretical argument to explain this phenomenon based on the first Hessian eigenvalue. The authors claim this result holds for all existing bio-inspired learning rules in RNNs and for different loss functions. Strengths:\n\n* The authors address the important issue of how biological systems learn and adapt to stimuli. Specifically, the authors address that RNNs trained with truncated BPTT have a greater generalization gap than the full BPTT on three existing tasks.\n* The authors focus on a well-defined metric, the 1st Hessian eigenvalue, in order to quantify their results.\n* The authors provide a theoretical analysis that suggests a cause for the difference in the generalization gap between full BPTT and truncated BPTT\n\nWeaknesses:\n* The authors generalize their claims to all biologically-inspired learning rules but limit their study to three specific learning rules and three tasks. Specifically, the authors focus on truncated BPTT in RNNs on three tasks with MSE loss. These results may hold only for truncated backpropagation through time but not for all biologically-plausible learning rules. The style implies these results hold for all biologically-inspired learning rules and all loss functions. If this is true, it should be presented in the main text. \n* The authors claim their results generalize beyond MSE loss but do not demonstrate this empirically. \n* The authors make claims of potential remedies, such as using larger learning rates early in training but do not justify this claim theoretically or demonstrate it empirically. \n* It is uncertain and not addressed if the generalization gap can be reduced by fine-tuning existing truncated BPTT algorithms.\n* A description of training parameters and procedure used for the biologically-plausible learning rules is not presented in the main test.\n* The authors cite almost 100 articles in section 2.1. Though these articles may be relevant to biologically-plausible learning in general it is not certain they are directly relevant to networks trained with truncated BPTT which is the focus of this study. \n\n\n Were potential remedies tested?\n\nIn figure 2 (a, b, c), does random initialization refer to weight initialization? Are all parameters of the learning algorithm held constant or are the distributions computed across all parameter configurations, e.g., learning rate ..etc? It appears that truncated BPTT can achieve the same generalization gap as BPTT. In the case of sMNIST it appears that it achieves zero generalization gap with a greater probability than full BPTT. NA ",
" The paper aims to study the generalizability of biologically plausible learning rules for RNNs compared to backprop through time which is not bio-plausible. This work relies heavily on the relationship between the curvature of the loss landscape and the generalizability of the model: that wider minima are more noise tolerant and generalize better. Empirically it is shown that bio-plausible learning rules generalize worse than the common implausible ML learning rules. This is then theoretically explained as being due to the instability introduced by the truncation of the true backprop gradient being used by the bio-plausible rules. Specifically, to avoid diverging with the bio-plausible rules a smaller learning rate can be used which makes gradient descent more susceptible to converge to steeper minima. # Strengths\n## Originality\nThis paper addresses a topic which I have not seen in the computational neuroscience literature and does so with care for both the machine learning and computational neuroscience considerations. I think the primary novelty of this work however comes in at the end with the finding that the noise from truncation does not affect generalization in the same manner as noise from smaller batch sizes for example (truncation noise hinders generalization while other forms of noise improve generalization by driving learning out of sharp minima). This is a new consideration to me and lead me to start thinking around whether the consistency of noisy direction of learning plays a role.\n\n## Quality\nThere are no points during the paper which stand out as being unreasonable, and from a scientific method point of view the work appears sound, with each step being justified either from a comp-neuro or ML perspective. Balancing the two topic was done well and I think that is a potentially understated positive for this work.\n\n## Clarity\nThe paper is well written and I found it to be clear and concise. The figures are well made and helpful.\n\n## Significance\nThe work appears significant from two perspectives. Firstly, work at the intersection of comp-neuro and ML is important and potentially very fruitful, however, not very common (at least not at the level of insight for both fields offered by this paper). Thus, I think this is significant to the comp-neuro community as a step towards introducing some of the more theoretical ML concepts such as loss landscape curvature. Secondly, this appears significant as it brings up the interesting concept that some noise may help generalization while other forms of noise hurt generalization. It does so in a manner which is also very useful from a learning stability perspective which also uses the Hessian matrix [1]. Thus from a theoretical ML perspective I think this work offers interesting insight and potential inspiration for future work.\n\n# Weaknesses\n## Originality and Significance\nIt must be noted that this work relies heavily of previous work in theoretical ML and tends to confirm what is already known (wider minima generalize better). That said, I think just because the results are not surprising does not mean they are not valuable. I am glad that this work has been done and acknowledge that it was worthy of confirming that the findings from ML carried towards bio-plausible learning rules. Indeed it seems to me that this work can still inspire future directs of ML work as I point out above. 
Thus I feel the strengths for originality and significance do outweigh this weakness.\n\nThat said, if the authors were to add to the discussion at the end about truncation noise reducing the stable step size, I would be inclined to increase my rating to a 7 or an 8. The one citation below may be of use for this purpose. I do think that this paper is worthy of acceptance and is of general interest to the machine learning and comp-neuro communities.\n\n[1] Cohen, Jeremy M., et al. \"Gradient descent on neural networks typically occurs at the edge of stability.\" arXiv preprint arXiv:2103.00065 (2021). Assuming I have understood the work and not misrepresented the facts above, I have no questions for clarification at present. For the rebuttal period I would find it helpful for the authors to further address the concept of different kinds of noise impacting generalization differently. The authors were clear that this study was not comparing bio-plausible learning rules but rather confirming that a well-established concept in ML carried to bio-plausible rules. This is the main limitation but it was clearly acknowledged. They also acknowledge that the relationship between curvature and generalization, while being the main point of study in this work, is messy and state that other factors may be at work for generalization.",
" The paper investigates generalization and loss landscape curvature in RNNs trained with biologically-motivated learning rules. The paper finds empirically that generalization gaps are correlated with loss landscape curvature, with higher curvature correlating with larger generalization gaps. Next, the paper shows theoretically that approximations to gradient descent produce more curved solutions than gradient descent. This is because gradient descent approximations take smaller steps in the direction of the gradient and therefore are unable to escape highly curved local minima. The component of updates orthogonal to the gradient is less relevant to generalization. Experiments on multiple datasets support this explanation. Finally, the authors demonstrate empirically that appropriate learning rate scheduling in biologically-plausible learning rules can significantly enhance performance. **Originality**\nAlthough generalization in biologically-motivated learning rules has been studied previously, this paper provides surprising new insights into the nature of solutions found by biologically-plausible learning rules. Notably, to my knowledge, this paper is the first to quantitatively explain with high precision the relatively poor generalization performance of biologically-plausible learning rules (see Figure 4). \n\n**Quality**\nThe theory is sound and well justified, and the experiments are generally comprehensive. The authors may want to consider some additional experiments to sharpen their claims. \n\nFirst, the authors may want to investigate whether generalization gaps correlate with Hessian eigenvalues in networks *trained with the same rule* (rather than aggregating results across rules as done in Figure 2). This would help demonstrate the generality of the claim that generalization is tied to loss landscape curvature. \n\nSecond, the authors may want to consider an additional variant of the three-factor rule where the update sizes are scaled such that the gradient-parallel component matches that of gradient descent. As the authors argue in equation, this can cause numerical instability. Nevertheless demonstrating this empirically would be helpful and would nicely complement the \"three-factor, theory\" rule.\n\nFinally, the paper seems to suggest that one of the main limitations of biologically-plausible learning rules is that using large update sizes for them leads to numerical instability while small update sizes leads to poorly generalizing local minima. Further explaining at which point this numerical instability occurs would be valuable (e.g. is the maximum stable learning rate limited by the eigenvalues of the Hessian?).\n\n\n**Clarity**\nThe paper is well written. The figures are particularly well illustrated and clear. The mathematical notation is generally adequately introduced and used.\n\nSome minor issues:\nThe W+ and W- notation in equations 7 and 8 is not explained.\nTypo- menchmark on line 140.\nTypo- assymptotic on line 1054.\n\n\n**Significance**\nThe paper may be quite significant to the field of biologically-plausible learning in RNNs. One of the important takeaways from the paper is that learning rate modulation of biologically-plausible learning rules may be a key to good performance. This may spur further research into developing better learning rate schedules for biologically-plausible learning rules. Furthermore, this paper may have implications for deep learning theory more generally. 
This paper considers the generalization gap of different algorithms by roughly fixing their training performance and comparing their test performance. How would the results change if the test performance were fixed and the training performance were varied (by stopping the rules at different points during training, for example)?\n\nDo generalization gaps correlate with Hessian eigenvalues in networks trained with the same rule?\n\nHow would a three-factor rule perform empirically when scaled to match the along-gradient update sizes of gradient descent?\n\nAt what point does numerical instability occur in biologically-plausible learning rules? The authors adequately address the limitations of the work. As the authors note, they do not consider varying architectures and leave the interaction between architecture and generalization as future work to be investigated. Moreover, the authors note that the loss landscape curvature does not fully explain generalization."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"5BGtUiXWCj7",
"5BGtUiXWCj7",
"5BGtUiXWCj7",
"ZjrJcehtem",
"5Y4DoFNE5uv",
"KC3llYU3e9j",
"nRmWsix2wj",
"2yeyDZ2bdw3",
"0Qn0mtD-iXX",
"3_MC2D5coJM",
"eYCLY_vttPW",
"nips_2022__4xg5moXVg",
"nRmWsix2wj",
"nRmWsix2wj",
"1OK9HSJhi9",
"1OK9HSJhi9",
"2yeyDZ2bdw3",
"2yeyDZ2bdw3",
"0Qn0mtD-iXX",
"0Qn0mtD-iXX",
"nips_2022__4xg5moXVg",
"nips_2022__4xg5moXVg",
"nips_2022__4xg5moXVg",
"nips_2022__4xg5moXVg"
] |
nips_2022_oDWyVsHBzNT | Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief | Model-based offline reinforcement learning (RL) aims to find highly rewarding policy, by leveraging a previously collected static dataset and a dynamics model. While the dynamics model learned through reuse of the static dataset, its generalization ability hopefully promotes policy learning if properly utilized. To that end, several works propose to quantify the uncertainty of predicted dynamics, and explicitly apply it to penalize reward. However, as the dynamics and the reward are intrinsically different factors in context of MDP, characterizing the impact of dynamics uncertainty through reward penalty may incur unexpected tradeoff between model utilization and risk avoidance. In this work, we instead maintain a belief distribution over dynamics, and evaluate/optimize policy through biased sampling from the belief. The sampling procedure, biased towards pessimism, is derived based on an alternating Markov game formulation of offline RL. We formally show that the biased sampling naturally induces an updated dynamics belief with policy-dependent reweighting factor, termed Pessimism-Modulated Dynamics Belief. To improve policy, we devise an iterative regularized policy optimization algorithm for the game, with guarantee of monotonous improvement under certain condition. To make practical, we further devise an offline RL algorithm to approximately find the solution. Empirical results show that the proposed approach achieves state-of-the-art performance on a wide range of benchmark tasks. | Accept | I went through the manuscript, reviews and authors' responses. I think this paper is qualified for NeurIPS publication. | train | [
"ER0gTgELF_F",
"wpIRTIHinzK",
"EFQivrnq3I3",
"-RtVCxVl0r",
"69jVhiNORQI",
"Rt01aVd8VxZ",
"vTW7hmCTPCv",
"-al2f-beuz-",
"AW6qrRJqf7Z",
"vNS1lRbBcAc",
"xKMYXqoWRKh",
"Sd63hkSIfz",
"a8qT8X8NAjw",
"Uuwsmb6R8fj",
"bzEaOVOtqHL",
"HMvhWqJy8rL",
"Vid6s74DSYt"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for your time to further evaluate the value of this work!",
" I have carefully read your new comments. Considering the potential impacts to RL community, I believe this work deserves a higher score. I've raised my score. Congrats to the authors for the remarkable work!",
" We thank all reviewers for the constructive and insightful comments, and we are glad that the raised concerns have been resolved. The comments also help us to make the broader impact explicit, which we want to highlight as below: \n\n- **Bayesian Sides of RL:** RL is previously considered from two types of Bayesian view. 1) Bayesian RL [1] applies belief MDP formulation and updates the belief during deployment to promote better decision. The belief is updated upon new observation, while in offline setting no additional data can be received and the proposed belief update comes from our tendency to conservative policy evaluation/optimization. This work can be deemed as an illustration on how to insert subjective preference to the solved policy without relying on data. 2) RL was recast as approximate posterior inference [2], allowing to integrate the policy learning and the reasoning about compositionality and partial observability in a principled manner. In this work, the proposed approach is elaborately recast as EM algorithm with structured posterior inference (in Appendix C). This enables the extension of the related research achievements in online setting to the offline setting, such as hierarchical reinforcement learning [3] and partially observed reinforcement learning [4] based on \"RL as inference\". \n\n- **Bayesian Decision Theory:** Decision making under uncertainty is a broad and long-standing research topic. Various criterions regarding robustness or conservativeness have been proposed, such as value at risk (VaR) and conditional value at risk (CVaR). In context of RL, robust MDP is a simplified version of VaR. The criterions of VaR and CVaR only focus on the pessimistic performance and ignore the others. In this work, we propose an alternative criterion, which tackles the entire spectrum of plausible transitions while also gives more attention on the pessimistic parts. Although the recent works [5,6] also try to avoid the excessive fixation on pessimism, their formulation is proven NP-hard or the approach relies on heuristic. Apart from RL, we believe this criterion also adapts well to one-step bandit problem (by pessimistically sampling the reward belief).\n\n- **Knowledge Insertion for Data-Limited Policy Optimization:** In offline setting, extra knowledge is strongly desired to further optimize policy. The proposed approach provides an interface to absorb the aforehand knowledge of system transition. On one hand, with richer knowledge, the performance evaluation is more exact and the optimization is away from excessive conservatism. On the other hand, the knowledge can help tuning hyper-parameter with treating the AMG performance as surrogate (as discussed in A2 to Reviewer 998y). However, the knowledge is not easily accessed in general. We hope this work can inspire more research interests on learning data-drivien knowledge from related offline datasets or similar online tasks.\n\nPlease kindly let us know if you have any question about these. We thank again the reviewers for the positive comments, and appreciate it if the reviewers think the work deserves higher score.\n\n[1] Ghavamzadeh, M., et al. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 2015.\\\n[2] Levine, S.. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv, 2018.\\\n[3] Haarnoja, T., et al. Latent space policies for hierarchical reinforcement learning. ICML, 2018.\\\n[4] Lee, Alex X., et al. 
Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. NeurIPS, 2020.\\\n[5] Lobo, E. A., et al. Soft-Robust Algorithms for Batch Reinforcement Learning. arXiv, 2021.\\\n[6] Rigter, M., et al. Planning for Risk-Aversion and Expected Value in MDPs. AAAI 2022.",
" Your response to my comments is reasonable and objective. Congrats on a nice paper!",
" We are glad to have addressed your concern. Thanks for the apprecation of this work and raising the score!",
" Dear authors, thanks for the thorough answer to my concerns. I think all of my questions were answered, I will update my score accordingly an recommend acceptance.",
" I think all my concerns were well addressed in the rebuttal. I will keep my score unchanged. Congratulations to the authors for both the nice emprical and theoretical results!",
" Compared to the standard Bellman backup operator in Q-learning, the proposed one additionally includes the expectation over $\\mathcal{T}\\sim\\mathcal{P}_T^N$ and the $k$-minimum operator over $\\tau\\in\\mathcal{T}$. We report the impact of choosing different $k$ in Table 2, and present the impact of the randomness of $\\mathcal{T}$ as below. Fixed $\\mathcal{T}$ denotes that we sample $\\mathcal{T}$ from the belief distribution and then keep fixed during policy optimization.\n\nTask Name| Stochastic $\\mathcal{T}$| Fixed $\\mathcal{T}$\n-|-|-\nhopper-medium | 106.8$\\pm$0.2 | 106.2$\\pm$0.3\nwalker2d-medium | 94.2$\\pm$1.1 | 90.1 $\\pm$4.3\nhalfcheetah-medium | 75.6$\\pm$1.3 | 73.1$\\pm$ 2.8\n\nWe observe that the randomness of $\\mathcal{T}$ has a mild effect on the performance in average. The reason can be that we apply the uniform distribution over dynamics ensemble as initial belief (without additional knowledge to insert). The model ensemble is reported to produce low uncertainty estimation in distribution of data coverage and high estimation when departing the dataset [1]. This property makes the optimized policy keep close to the dataset, and it does not rely on the randomness of ensemble elements. However, involving the randomness could lead to more smooth variation of the estimated uncertainty, which benefits the training process and results in better performance. Apart from these empirical results, we highlight that in cases with more informative dynamics belief, only picking several fixed samples from the belief distribution as $\\mathcal{T}$ will result in the loss of knowledge.\n\n[1] Lakshminarayanan, B., et al. Simple and scalable predictive uncertainty estimation using deep ensembles. NeurIPS 2017. \n",
" Thanks for your insightful comments. We provide clarification to your questions and concerns as below.\n\n**Q1: How to construct a sufficient belief without expert knowledge for an arbitrary task?**\n\nA1: In cases where expert knowledge is unavailable, the best we can hope for is that the final policy stays close to the dataset, but unnecessary to be fully covered (as we want to utilize the generalization ability of dynamics model at least around the data). To that end, the dynamics belief is desired to be certain at the region in distribution of dataset, and turns more and more uncertain when departing. It has been reported that the simple model ensemble leads to such a behavior, for example Figure 1 in [1]. In this sense, the uniform distribution over learned dynamics ensemble can act as a quite common belief. For small dataset, Gaussian process with smooth kernel is also a good choice to achieve this. We will add such explanation in the paper. \n\n[1] Lakshminarayanan, B., et al. Simple and scalable predictive uncertainty estimation using deep ensembles. NeurIPS 2017.\n\n**Q2: How to tune the hyper parameter $k$ and $N$ when processing a new task?**\n\nA2: The performance in AMG can be treated as surrogate to tune $k$ and $N$ offline. To be concrete, we set an anticipation on how the performance is after offline optimization, according to the statistics of the per-trajectory returns contained by the dataset, or other knowledge of the task and dataset. With the monotonicities of $k$ and $N$, they can be tuned such that the achieved performance in AMG is close to the anticipated performance. As we have a reasonable anticipation, the performance of AMG hopefully serves as a good surrogate to the real performance, like that in Figure 2, Tables 2 and 4.\n\nWe believe that the hyper-parameter tuning without online access is challenging for any offline algorithm. Apart from introducing offline surrogate, the robustness of hyper-parameter is also crucial. In this paper, we empirically verify the robustness of $k$ and $N$, i.e., the results in Table 1 keeps the same $k$ and $N$ over all tasks, changing $k$ and $N$ still retains comparable/competitive performance in Tables 2 and 4. Concurrently, the algorithmic robustness is also explored in the ICML'22 outstanding paper [2].\n\n[2] Cheng, C. A., et al. Adversarially trained actor critic for offline reinforcement learning. ICML 22.\n\n**Q3: How to determine the weights in Eqn. (11)? The optimization over the Bellman residual of both the AMF and the empirical MDP is applied with the same weight (both are 1).**\n\nA3: We add experiments to check the impact of different weights:\nTask Name|0.5:1.5| 1.0:1.0 |1.5:0.5\n-|-|-|-\nhopper-medium | 106.6$\\pm$0.3 | 106.8$\\pm$0.2 | 106.5$\\pm$0.3\nwalker2d-medium| 93.8$\\pm$1.5 | 94.2$\\pm$1.1 | 93.1$\\pm$1.3\nhalfcheetah-medium| 75.2$\\pm$0.8 | 75.6$\\pm$1.3 | 76.1$\\pm$1.0\n\nIn the above, the performance does not obviously depend on the weights. But in cases where expert knowledge about dynamics is available, we would like to insert it into dynamics belief. The weights can be adjusted to match our confidence on the knowledge, i.e., the less confidence, the smaller weight for AMG.\n\n**Weakness: The experimental results can be made more comprehensive. 
In Figures 2 and 3, what is the possible cause of the large dip in half-cheetah-medium large dip in halfcheetah-medium task?**\n\nA4: We will add the experimental results mentioned in A3, [A3 to Reviewer y5WY] and [A4 to Reviewer LbFj] to the main paper.\n\nIn Figure 3, we notice that the large dip of Q-value is accompanied with the sudden raise of dynamics uncertainty. We suspect this is due to the optimized policy being far from the dataset. We try to verify by checking the maximal return covered by the offline dataset. It shows that the maximal normalized returns provided by offline datasets are 99.6, 92.0 and 45.0 respectively for hopper, walker2d and halfcheetah, while the proposed approach achieves 106.8, 94.2 and 75.6. The policy optimization is more significant for halfcheetah (where we observe the large dip), indicating the policy should move further from the dataset. \n\nThe above finding also explains why the AMG performance in Figure 3 runs into large dip only for halfcheetah: with the larger dynamics uncertainty, the secondary player can choose the more pessimistic transition. However, we want to highlight that it is normal behavior of the proposed algorithm and does not mean instability, as we are handling the alternating Markov game, a specialization of zero-sum game. Besides, we can see that even when the AMG performance goes down the MDP performance is still stable.\n\n**Weakness: Eqn. (18) undefined**\n\nA5: (18) will be replaced by (7). Thanks for pointing it out.",
" Thanks for your constructive comments. We provide clarification to your questions and concerns as below.\n\n**Q1: 295: if you assume online-access here, do you break offline RL assumptions? In that case, the core assumptions of the scenario should be clarified here. This would make the comparison between this algorithm and fully offline methods slightly less convincing, if you assume i.e. a few-shot scenario.**\n\nA1: The on-policy data is collected in the constructed AMG, thus no assumption of online access is needed. In experiment, the proposed algorithm is examined with solely using the offline data. We will revise the misleading description in the paper.\n\n**Q2: Theorem 2: is that MDP stationary? It depends on pi, so that might make things more complicated? If the policy has to be learned in a changing MDP, how does that affect learning? Theorem 1 states that the operator is a contractive mapping, but**\n\nA2: The equivalent MDP depends on $\\pi$ and indeed changes during policy optimization. Theorem 2 is from view of connecting the proposed formulation with the original problem, but not from the algorithmic view. When dealing with policy optimization in Section 4, we do not rely on the equivalent MDP to devise algorithm. Theorem 1 is about policy evaluation and considers fixed policy, while Theorems 4 and 5 are about policy optimization. They show that the policy improvement is convergent, and with the converged policy we can recall Theorem 2 to obtain the equivalent MDP.\n\n**Q3: (14) why not KL as difference of distributions, or Wasserstein? Is there a specific reason for this concrete formulation?**\n\nA3: We intentionally choose KL as the similarity measure between probabilities. The reason is that it can be tractably and unbiasedly estimated via Monte Carlo sampling $a\\sim\\pi_{\\phi'}$. It would be interesting to explore other possible choices, such as Wasserstein distance or total variation distance, but we want to leave it for other works.\n\n**Weakness: Related works**\n\nA4: We will move the related works in Appendix A to the main paper, and also include the following discussion.\n- Bayesian Robust RL\n * Bayesian Robust RL [1] is based on the problem setting of Bayesian RL, where new observations are continually received and utilized to make better decision. The goal of Bayesian RL is to fast **explore and adapt** when deployed in the environment which is pre-considered during training process. As comparison, offline RL focuses on how to sufficiently **exploit** the offline dataset to generate the best-effort policy supported by the dataset. Bayesian RL is sometimes recast as belief MDP where the belief is updated upon the new observation, however in our work the dynamics belief is updated to conservatively evaluate policy (we build connection between the belief update and approximate Bayesian inference in Appendix C). It would be interesting to integrate the two types of belief update for the downstream topic, i.e., online adaptation of offline RL.\n * Bayesian Robust RL [1] considers the robustness by resorting to robust MDP, where the uncertainty set is defined as a L1-norm ball. One contribution there is that the uncertainty set will be updated upon new observations to alleviate the degree of conservativeness. In contrast, our considered AMG is devised to avoid the disadvantages of robust MDP. 
In this sense, the contributions are orthogonal.\n- CVaR-style algorithms\n * All of CVaR, robust MDP and our proposed criterion can be deemed as the specializations of Bayesian decision theory, however they are derived from different principles and with different properties. Robust MDP purely focuses on the quantile performance, and ignoring the other possibilities is reported ([25-27] in the paper) to produce over-conservative behavior. CVaR instead considers the average performance of the worst $\\delta$-fraction possibilities. Although CVaR involves more information about the stochasticity, it is still solely from the pessimistic view. Recent works propose to improve by maximizing the convex combination of mean performance and CVaR [3], or maximizing mean performance under CVaR constraint [4]. However, they are intractable regarding policy optimization, i.e., proved as an NP-hard problem or relying on heuristic. The AMG formulation presents an alternative way to tackle the entire spectrum of plausible transitions while also give more attention on the pessimistic parts. Besides, the policy optimization is with theoretical guarantee.\n * Apart from the difference of criterion, [2] also considers the setting of Bayesian RL.\n\n[1] Derman, E., et al. A Bayesian approach to robust reinforcement learning. UAI 2020.\\\n[2] Rigter, M., et al. Risk-averse Bayes-adaptive reinforcement learning. NeurIPS 2021.\\\n[3] Lobo, E. A., et al. Soft-Robust Algorithms for Batch Reinforcement Learning. arXiv, 2021.\\\n[4] Rigter, M., et al. Planning for Risk-Aversion and Expected Value in MDPs. AAAI 2022.",
" **Weakness: The set generating mechanism and definitions in (3) could be explained better. $\\mathcal{T}$ seems to refer to a set of transitions, but $\\bar{s}$ is a state (or a single transition)? This seems to be explained in the text under the equations, I think it might be clearer for readers if the state and action spaces of the 2nd player was fully explained in the introduction of the formalism.**\n\nA5: $\\bar{s}$ is the state for secondary player in general AMG. After introducing the general AMG, we instantiate it in the setting of offline RL, where $\\bar{s}=\\mathcal{T}$ means that the state of secondary player is the set of plausible transitions. Thanks for your suggestion, we will try best to improve.\n\n**Unclear lines/grammatical errors**\n\nA6: We will fix them in the paper.\n\n**Limitations: A brief discussion is given, without mentioning any major drawbacks in depth.**\n\nA7: The initial belief provides the interface to insert additional knowledge, but also introduces potential risks: 1) when inserting incorrect or biased knowledge, the optimization procedure would be misled and the reality gap can be amplified; 2) when considering data-driven approach to learn extra knowledge from multi-task datasets, it is not straightforward to devise a principled criterion on similarity measurement between the concerned task and the multiple tasks. This brings challenge to insert knowledge for arbitrary tasks. However, with only offline dataset, it is always expected to integrate other knowledge to obtain significant policy improvement. We hope highlighting these inspires more research on this topic.",
" Thanks for your constructive comments. We provide clarification to your questions and concerns as below.\n\n**Q1: The updating of game transitions/dynamics beliefs**\n\nA1: To clarify, let's distinguish the game transition $G$ in AMG and the system transition $T$ in MDP. The game transition $G$ is kept fixed throughout. Theorem 2 states that given an initial belief distribution over $T$, the conservative evaluation produces an updated belief distribution, still over $T$. The belief update is introduced to explain how the AMG connects with MDP, and not truly executed on the algorithmic level. When dealing with policy optimization in Section 4, we do not rely on the equivalent MDP to devise algorithm.\n\nIn Theorem 2, the new belief is obtained via reweighting initial belief. The reweighting factor for system transition $\\color{blue}\\tau^{sa}$ depends on the value of $\\mathbb{E}_{{\\color{blue}\\tau^{sa}},\\pi}\\big[Q_\\{N,k\\}^{\\pi}(s',a')\\big]$. This term can be regarded as a pessimism indicator for $\\tau^{sa}$, as it predicts how is the performance if the system transits following $\\tau^{sa}$. Note that $\\tau^{sa}$ itself is random following the belief distribution, then $\\mathbb{E}_\\{\\tau^{sa},\\pi\\}\\big[Q_\\{N,k\\}^{\\pi}(s',a')\\big]$, as a functional of $\\tau^{sa}$, is also random. Thus, we can define its cumulative density function, i.e., $F\\left(\\mathbb{E}_\\{ \\tau^{sa}, \\pi\\}\\big[Q_\\{N, k\\}^{\\pi}(s',a')\\big]\\right)$. \nThe interesting thing is that the reweighting factor achieves maximum for $\\tau^*:F\\left(\\mathbb{E}_\\{\\tau^*,\\pi\\}\\big[Q^\\pi_\\{N,k\\}(s',a')\\big]\\right)=\\frac{k-1}{N-1}$, i.e., the transition with $\\frac{k-1}{N-1}$-quantile pessimism indicator. Besides, when $\\mathbb{E}_\\{\\tau^{sa}, \\pi\\}\\big[ Q^\\pi_\\{N,k\\}(s',a')\\big]$ departs the $\\frac{k-1}{N-1}$ quantile, the reweighting coefficient for its $\\tau^{sa}$ decreases. In this way, the new belief is reshaped towards concentrating around the $\\frac{k-1}{N-1}$ quantile. \n\n**Q2: Inserting additional knowledge, scenarios and expected performance gain.**\n\nA2: The additional knowledge can be inserted by pre-defining more informative initial belief. For example,\n- Consider the physical system where the dynamics can be described as mathematical expression but with uncertain parameter. If we have a narrow distribution over the parameter (according to expert knowledge or inferred from data), the system is almost known for certain. Here, both the mathematical expression and narrow distribution provide more information.\n- Consider the case where we know the dynamics is smooth with probability of 0.7 and periodic with probability of 0.3. Gaussian processes (GPs) with RBF kernel and periodic kernel can well encode these prior knowledges. Then, the 0.7-0.3 mixture of the two GPs trained with offline data can act as the dynamics belief to provide more information.\n- In the case where multi-task datasets are available, we can train dynamics models using each of them and assign likelihood ratios to them. If the likelihood ratio well reflects the similarity between the concerned task and the offline tasks, the multi-task datasets promote knowledge.\n\nThe performance gain is expected to monotonously increase with the amount of correct knowledge. 
As an impractical but intuitive example, with the exact knowledge of system transition (the initial belief is a delta function), the proposed approach is actually optimizing policy as in real system.\n\n**Q3: Stability of training process**\n\nA3: We are not sure whether the concern is due to ''the update of game transition\" or the adversarial-style problem formulation. We clarify that the game transition is fixed in A1, the concern could be resolved for the first case. Regarding the general adversarial training, the vanilla gradient-based method does suffer from instability, especially reported in GAN-related works. However, our formulated problem is solved based on the contraction mapping, and it is theoretically guaranteed to converge for fixed reference policy (Theorem 4). For periodically updated reference policy, Theorem 5 gives the condition of monotonous improvement, which is easily satisfied by considering Theorem 4. Our practical algorithm applies slow-evolving reference policy. In the experiment, we keep the hyper-parameter fixed, and do not see unstable behavior regarding online evaluation. We will release code in short future such that it can be verified.\n\n**Ablation experiment**\n\nA4: The core part is formulating AMG to characterize the impact of dynamics uncertainty, but indeed we can do ablation study on the elements of AMG. The impact of whether to choose the worst dynamics is reported in Table 2. Currently, we are doing experiments about how the randomness of candidate set affects, and will post it once done.\n\n**Presentation in Section 3.1**\n\nA5: We will try best to simplify the symbol and formulation in Section 3.1.\n\n**Limitation**\n\nA6: We will add the discussion in A2 to main paper, and include the ablation experiment.",
" Thanks for your insightful comments. We provide clarification to your questions and concerns as below.\n\n**Q1: How does the slow-evolving policy update related to TRPO/PPO style KL-constrained policy optimisation?** \n\nA1: They both emphasize slow policy update, but come from different derivations and different motivations. As PPO can be regarded as a simplified variant of TRPO, we discuss TRPO in the following. \n\nThe slow-evolving policy update produces $\\pi_{\\phi'}$, where $\\phi' \\leftarrow \\omega_1\\phi + (1-\\omega_1)\\phi'$, and $\\pi_{\\phi}$ is the policy being optimized to maximize $\\bar{J}(\\pi_{\\phi};\\pi_{\\phi’})$ in equation (9). Its differences to TRPO regarding derivation are as follows\n- Compared to TRPO which treats $D_\\text{KL}(\\pi_\\text{old}||\\pi)\\leq \\epsilon$ as constraint, $\\bar{J}(\\pi_{\\phi};\\pi_{\\phi’})$ involves $\\mathbb{E}\\left[D_\\text{KL}(\\pi_{\\phi}||\\pi_{\\phi’})\\right]$ as regularizer. Note that the expectation is over the state distribution induced by $\\pi_{\\phi}$ in AMG. Then, during the maximization of $\\bar{J}(\\pi_{\\phi};\\pi_{\\phi’})$, the regularizer has two effects: keep $\\pi_{\\phi}$ close to the reference policy; encourage $\\pi_{\\phi}$ to generate the states where the KL term is small. In this way, $\\pi_{\\phi}$ is optimized with the consideration of long-term KL regularization. In contrast, the KL constraint in TRPO is not directly affected by the state distribution. This difference about KL regularization/constraint also exists between TRPO and soft actor critic. \n- $\\pi_{\\phi’}$ is a slow-changing version of $\\pi_{\\phi}$, while in TRPO $\\pi_\\text{old}$ is updated as $\\pi$ after collecting new data. In our initial experiments, we found the soft update results in faster learning. \n\nRegarding the motivation, TRPO optimizes a first-order approximation to the expected return, and $\\pi$ is constrained close to $\\pi_\\text{old}$ so that the approximation is exact enough to truly improve performance. In our approach, maximizing $\\bar{J}(\\pi_{\\phi};\\pi_{\\phi’})$ is actually a bi-level problem, where the outer problem is to improve $\\pi_\\phi$ and the inner problem is to conservatively evaluate policy. Obviously, without sufficient evaluation, the policy improvement will be misled. To avoid this, the KL regularizer restricts $\\pi_\\phi$ in a small region near $\\pi_{\\phi'}$, such that these policies can be evaluated sufficiently with limited computation before improvement. The delay between $\\pi_\\phi$ and $\\pi_{\\phi'}$ is introduced to provide the time window for sufficiently conservative evaluation.\n\n**Q2: It would be interesting to understand more the motivation behind the design of the optimization procedure in Section 4. How inefficient is solving the original problem, what is the overall gain in training speed by using the reference policy?**\n\nA2: We explain the motivation in A1, and here elaborate why the proposed approach is more efficient than solving the original problem. In general, for the bi-level problem $\\max_x \\min_y f(x,y)$, a reliable approach is first finding the optimum for inner problem and then taking a single gradient update to the outer variable, i.e., $x\\rightarrow x + lr\\cdot \\nabla_x f(x,y^*)$. In this way, every solved inner problem only contributes a single gradient step to the outer problem. 
In the proposed approach, we instead constrain the policy in a small region via the KL regularizer, and it hopefully produces a more significant policy update after solving each problem regarding $\bar{J}$. \n\nThis idea can be compared with that of TRPO. With a batch of newly collected data, TRPO applies a KL constraint to improve the policy more than a single step of vanilla policy gradient. The essential goal is to reduce sample cost. In our setting, the goal is to reduce computational cost. In the initial experiments, we tried alternating policy updates (i.e., the policies of the primary and secondary players) to solve the original problem; the performance improves stably only with an extremely small learning rate for the primary policy, and it is precisely this inefficiency that motivated us to consider the KL-regularized version.\n\n**Related work: [1] considers a related formulation which also induces pessimistic dynamics. It would be valuable to compare these works, but I appreciate this work appeared after the ICML deadline.**\n\nA3: We will add the full comparison in the paper. Currently, at least for the 12 tasks overlapping with our experiment, the proposed approach achieves an obvious gain (average score: 79.9 with consistent hyper-parameters vs. 67.7 with hyper-parameters tuned per task). We conjecture one reason is that the problem formulation in [1] is based on robust MDP.\n\n**Weakness & Limitation**\n\nA4: We will try our best to improve the presentation. We will move the related works to the main paper, and include the discussion about efficiency (like A2).\n\n**Typos and D4RL detail**\n\nA5: Thanks for pointing them out. We will fix the typos and state the version of the D4RL dataset (v2).",
" The paper proposes an Alternating Markov Game formulation of offline RL which induces a MDP with pessimistic dynamics. The authors derive an approximate solution to this game which requires no uncertainty penalization and show strong results on the D4RL MuJoCo benchmark. Strengths:\n- Strong theoretical background including derivation of connection between AMG and the equivalent MDP, well-related to standard offline MBRL in the $N=k=1$ case.\n- Strong empirical results on the D4RL MuJoCo benchmark. The offline evaluation is thorough uses a single hyperparameter setup across all environments, thus avoiding using online samples to tune performance for future deployment.\n\nWeaknesses:\n- Presentation is sometimes dense, related work should be discussed in the main paper.\n\nMinor:\n- Typo on line 22, 345: trial instead of trail.\n- The version of D4RL dataset used in the main evaluation should be highlighted, there are performance differences between v0-v2.\n- Section link in line 498 broken.\n- [1] considers a related formulation which also induces pessimistic dynamics. It would be valuable to compare these works, but I appreciate this work appeared after the ICML deadline.\n\n[1] Rambo-rl: Robust adversarial model-based offline reinforcement learning. M Rigter, B Lacerda, N Hawes.\n - How does the slow-evolving policy update relate to TRPO/PPO style KL-constrained policy optimisation?\n- It would be interesting to understand more the motivation behind the design of the optimization procedure in Section 4. How inefficient is solving the original problem, what is the overall gain in training speed by using the reference policy?\n Efficiency of the algorithm should be detailed in the main paper, as well as discussion of related work.",
" This paper points out that characterizing the impact of dynamics uncertainty through reward penalty may incur unexpected over-conservative behaviors. To overcome this problem in existing model-based offline RL works, the paper proposes to maintain a belief distribution over dynamics belief and optimize policy via biased sampling from the distribution. The sampling procedure is derived based on an AMG formulation of offline RL. For practical consideration, an offline RL approach named PMDB is further designed. Results on the D4RL benchmark show the state-of-the-art performance of PMDB. Strengths: \n* Motivation: The paper points out an important while has been ignored problem in model-based offline RL, that it’s unsuitable to directly characterize the dynamic uncertainty by reward penalty. \n* Theoretical Contribution: To solve the problem, the paper formulates the offline RL problem as an AMG and discusses the relationship between AMG and robust MDP. Throughout the formulation, several interesting theoretical results are provided. The illustration that AMG is a successive interpolation between model-based RL and robust MDP provides an intuitive explanation of the theoretical results.\n* Empirical Contribution: The paper verifies the proposed PMDB method on D4RL and shows the SOTA performance without the need to tune hyperparameters for each task. \n\nWeaknesses:\n* It may be due to the complexity of the overall framework, the paper is somewhat hard to follow. It took me a very long time to understand the symbols and formulations defined in Section 3.1. \n* Although the framework is very complicated, it seems the core parts that work are: (1) dynamically adjusting the game transitions in each training iteration; (2) Updating the Q-function based on the K-percentile dynamics instead of the worst one. While with the theoretical proof, it would be better to do ablations on the above two parts to further illustrate the effectiveness of the designs.\n 1. I’m not sure if I actually understand the updating of game transitions/dynamics beliefs. Could you please explain it to me in detail?\n2. PMDB relies on the initial dynamics belief and the paper uses a uniform distribution over dynamics as the initial belief. Do you mean inserting additional knowledge into the initial dynamics belief is to pre-define an initial dynamic belief? If so, could you give some examples of the scenarios in which we obtain priors on dynamic belief? If involved the expert knowledge, how much will the performance of the algorithm \n3. I’m concerned about whether the training of PMDB is stable considering the design of this framework?\n\n The paper discussed the limitation that expert knowledge is not always available thus it’s hard to manually pre-define the dynamics belief. Besides, I think another limitation is the lack of ablations on several key components. ",
" The paper proposes a mechanism for offline RL based on a two player game with a Bayesian environment, in which an RL actor picks a policy and a environment actor picks environment transitions from a likely set pessimistically. The authors show general convergence and existence results in their framework and empirical results based on the established D4RL framework. == Strengths ==\n- The core theory is well laid out and supported with theoretical results as well as empirical investigation.\n- The structure is overall well laid out and easy to follow. Some minor points of confusion and suggestions are discussed below, but in general, the paper is presented in a clear legible fashion.\n\n== Weaknesses ==\n- The paper is missing a clear related work section. This makes it harder to position it in the relevant literature and makes it difficult to assess whether concurrent similar strands of the literature were reviewed. Since I am not an expert in the concrete field the authors are tackling, some of the following concerns might be wrong or missing relevant details, but I will list them for discussion:\n - My biggest concern is that no other works on Bayesian Robust RL were cited. The formulation the authors present seems novel, but to the best of my knowledge several other works have tackled Bayesian Robust RL and a clear comparison would greatly improve the paper. I.e. how does the proposed method relate to https://arxiv.org/abs/1905.08188 and similar works.\n - The other strand of research I am missing comparison to is CVaR style algorithms, which seem similar in its setup in targeting a quantile of returns. For example, this paper from NeurIPS 2021 seems very related https://papers.nips.cc/paper/2021/hash/08f90c1a417155361a5c4b8d297e0d78-Abstract.html\n- The set generating mechanism and definitions in (3) could be explained better. Big $\\tau$ seems to refer to a set of transitions, but $\\bar{s}$ is a state (or a single transition)? This seems to be explained in the text under the equations, I think it might be clearer for readers if the state and action spaces of the 2nd player was fully explained in the introduction of the formalism.\n\nUnclear lines/grammatical errors:\n- Line 51: with entirely different implication compared to the reward or Q-value?\n- Line 140: To differentiate with uncertainty set we call it candidate set, but to avoid redundancy we reuse the notation of T?\n- Line 24: in vast of applications? - 295: if you assume online-access here, do you break offline RL assumptions? In that case, the core assumptions of the scenario should be clarified here. This would make the comparison between this algorithm and fully offline methods slightly less convincing, if you assume i.e. a few-shot scenario.\n- Theorem 2: is that MDP stationary? It depends on pi, so that might make things more complicated? If the policy has to be learned in a changing MDP, how does that affect learning? Theorem 1 states that the operator is a contractive mapping, but \n- (14) why not KL as difference of distributions, or Wasserstein? Is there a specific reason for this concrete formulation? A brief discussion is given, without mentioning any major drawbacks in depth.",
" Existing studies on model-based offline RL often rely on the quantification of the uncertainty in the learned dynamics and utilize it to regularize the reward. One drawback of this approach is that it may result in unreliable evaluation on the impact of the uncertainty and incur unexpected tradeoff between model utilization and risk avoidance. To resolve this issue, the authors introduced a novel model-based offline RL algorithm with pessimism-modulated dynamics belief, where the uncertainty is not explicitly quantified but a belief distribution over system dynamics is used for sampling instead, when updating the policy evaluation and policy update. The sampling procedure is cleverly modeled as an alternating Markov game and the degree of pessimism can be determined by the hyper parameters in the sampling procedure. The proposed offline RL algorithm solves the AMG by iterative regularized policy optimization with monotonous improvement guarantee. The empirical results on D4RL shows the efficiency and performance improvement of the proposed algorithm. Strength:\n+ The proposed methodology is explained in detail and the paper is easy to follow. The empirical results and ablation studies shwocase the efficiency of the proposed approach on the standard benchmarks. \n\n+ The formulation of offline RL as AMG is interesting and leads to the development of pessimism-modulated dynamics belief derivation.\n\n+ The theoretical results on the pessimism-modulated dynamics belief is promising and serves good guidance on the empirical offline RL algorithm design. \n\nWeakness:\n- The experimental results can be made more comprehensive. In Figures 2 and 3, what is the possible cause of the large dip in half-cheetah-medium large dip in halfcheetah-medium task?\n\n- How to tune the hyper parameter $k$ and $N$ when processing a new task? \n\n- How to determine the weights in Eqn. (11)? The optimization over the Bellman residual of both the AMF and the empirical MDP ia applied with the same weight (both are 1).\n\n- Line 194, the reference to Eqn. (18) is undefined.\n How to construct a sufficient belief without expert knowledge for an arbitrary task? How to tune the hyper parameter $k$ and $N$ when processing a new task? \n\nHow to determine the weights in Eqn. (11)? The optimization over the Bellman residual of both the AMF and the empirical MDP ia applied with the same weight (both are 1). One limitation of this work is that is is unclear a prior how to obtain a sufficient belief without expert knowledge. It will be of interest to shed light on how to construct a valuable (initial) dynamic belief for arbitrary tasks."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"wpIRTIHinzK",
"EFQivrnq3I3",
"nips_2022_oDWyVsHBzNT",
"AW6qrRJqf7Z",
"Rt01aVd8VxZ",
"xKMYXqoWRKh",
"-al2f-beuz-",
"Sd63hkSIfz",
"Vid6s74DSYt",
"HMvhWqJy8rL",
"HMvhWqJy8rL",
"bzEaOVOtqHL",
"Uuwsmb6R8fj",
"nips_2022_oDWyVsHBzNT",
"nips_2022_oDWyVsHBzNT",
"nips_2022_oDWyVsHBzNT",
"nips_2022_oDWyVsHBzNT"
] |
nips_2022_mNtFhoNRr4i | Hierarchical classification at multiple operating points | Many classification problems consider classes that form a hierarchy. Classifiers that are aware of this hierarchy may be able to make confident predictions at a coarse level despite being uncertain at the fine-grained level. While it is generally possible to vary the granularity of predictions using a threshold at inference time, most contemporary work considers only leaf-node prediction, and almost no prior work has compared methods at multiple operating points. We present an efficient algorithm to produce operating characteristic curves for any method that assigns a score to every class in the hierarchy. Applying this technique to evaluate existing methods reveals that top-down classifiers are dominated by a naive flat softmax classifier across the entire operating range. We further propose two novel loss functions and show that a soft variant of the structured hinge loss is able to significantly outperform the flat baseline. Finally, we investigate the poor accuracy of top-down classifiers and demonstrate that they perform relatively well on unseen classes. | Accept | The submission benchmarks several hierarchical classification techniques showing that flat softmax on the leaf nodes dominates most methods. This is quite a negative result for previous works, indicating that efforts on hierarchical classification losses are often not leading to better results. The authors introduce a loss function that does appear to result in performance beating the flat classification baseline. The writing style flows, there is a reasonably good review of the literature, and results appear to give some insights into methods for performing and evaluating hierarchical classification methods, which are complementary to the existing literature on a problem that has been studied in various forms for decades. The reviewers were unanimous in their opinion that the submission is just over the threshold for acceptance at NeurIPS after the rebuttal process. | train | [
"An_1g8accfa",
"SmVPvBM0v81",
"WCO_3hvuALs",
"Wn3jgW7C9Wx",
"xSWlXLgU0D8",
"DxAvIICF8kG",
"P3H-oJDlYsr",
"DhV-tDMmUT6",
"__g2hw7B1YG",
"-jBTV0Z59ma",
"lUOunTrhFiE"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This helps with the understanding",
" We have uploaded a revised version of the paper to address the main requests. Here is a summary of the changes.\n\n* Change paragraph 2 of Introduction to make motivation of curves more explicit\n* Add bullet point contributions at end of introduction\n* Related work: Remove citations that are not directly related; make connection to this paper more explicit\n* Revise Section 5 for clarity; use piecewise-continuous functions and add equation showing summations\n* Explain meaning of particular C(y, y') for Conditional Risk Minimization (Inference functions) and for soft-max-margin (Loss functions)\n* Fix mistakes in definition of C(y, y')\n* Re-word \"general approach\" in context of scaling to many classes (Conclusion)\n* Re-word \"logical modes of failure\" and \"excessive trust\" (Social impact)\n\nAnd in the supplementary material there are two new appendix sections:\n\n* Table giving all parameterizations and their different properties\n* Two algorithmic blocks for Section 5",
" Thanks for addressing most of my main concerns, reading the rebuttal and the other reviewers reponses I got a clearer picture of what the contributions are for this work. While the experiments, tradeoffs, and the thought experiments seem interesting, the results are not that impressive seeing how close (and sometimes worse) than Deep RTC and Softmax Flat, so there seems to be a big room for improvement. However, I will raise my score to a 6 \"Weak Accept\", because the work seems interesting enough to be published at a conference like NeurIPS.",
" ### Algorithmic blocks\n\nThe first algorithm, `OrderedParetoSet`, is not claimed as a novel contribution; it was introduced in Algorithm 3.1 of [Kung et al. (1975)](http://www.eecs.harvard.edu/~htk/publication/1975-jacm-kung-luccio-preparata.pdf).\nHowever, this ordered set forms the basis of our own algorithm, and we therefore include it for understanding.\n\nWhereas the original algorithm made the simplifying assumption that no two vectors have the same x or y value, we present a variant that supports duplicate values in either co-ordinate.\nThis is important in our case, since it is common for many nodes in the hierarchy to have the same information (e.g. all leaf nodes).\n\nType annotations: We use `[float]` to denote a list of floats (and thus `[[float]]` to denote a list of lists of floats).\n\n```\nOrderedParetoSet(x: [float], y: [float])\n // Returns list of indices defining the Pareto set,\n // ordered such that x is decreasing and y is increasing.\n // Require: len(x) = len(y) = number of points n\n n <- len(x)\n pi <- argsort(y, 'desc')\n x, y <- x[pi], y[pi]\n rho <- argsort(x, 'desc') // Must be stable sort\n x, y, pi <- x[rho], y[rho], pi[rho]\n // Have lexicographic order:\n // if i < j, then x[i] ≥ x[j] (and if x[i] = x[j], then y[i] ≥ y[j]).\n // Therefore element j is dominated iff ∃ i < j with y[i] > y[j]\n y_max <- cummax(y) // Like cumsum\n subset <- [j in 0..n-1 where j = 0 or y[j] > y_max[j-1]]\n pi <- pi[subset]\n return pi\n```\n\nThe second algorithm, `ConstructDatasetCurve`, takes a dataset of examples with groundtruth label `y[i]` and predicted distribution `p[i]` (a vector of probabilities) and returns the piecewise-constant function that results as the threshold is decreased from 1 to `tau_min`.\n```\nConstructDatasetCurve(y: [int], p: [[float]], tau_min: float)\n // Returns thresholds C and metric values Z defining piecewise-constant curve.\n // C is monotonically decreasing with C[0] = 1, C[-1] = tau_min.\n // Metric value is Z[j] for C[j] ≥ tau > C[j+1].\n // Inputs: For example i, groundtruth label y[i] and probability vector p[i],\n // minimum threshold min_tau.\n // Require: len(y) = len(p) = number of examples N\n // Require: len(p[i]) = number of nodes Y\n // Require: y[i] in [0, Y), p[i][y] in [0, 1], tau_min in [0, 1)\n // Require: p[i][0] = 1 (root node)\n // External: Constant vector I[y] with len(I) = Y\n // External: Function M(y, y_hat) that evaluates metric\n N <- len(y)\n Z_0 <- 0\n C <- []\n Delta <- []\n for i in 0..N-1 // For each example\n subset <- [y in 0..Y-1 where p[i][y] > tau_min] // Nodes that satisfy min threshold\n pi <- OrderedParetoSet(p[i][subset], I[subset])\n y_hat <- subset[pi] // Possible predictions for any tau ≥ tau_min\n K <- len(y_hat)\n z <- [M(y[i], y_hat[k]) for k in 0..K-1] // Evaluate metric for each prediction\n delta <- diff(z)\n c <- p[i][y_hat[1:]]\n Z_0 <- Z_0 + z[0]\n C <- concat(C, c)\n Delta <- concat(Delta, delta)\n end for\n order <- argsort(C, 'desc') // Note: Can use priority queue for slightly faster merge\n C, Delta <- C[order], Delta[order]\n C <- concat([1], C, [min_tau])\n Delta <- concat([0], Delta)\n Z <- (1/N) * (Z_0 + cumsum(Delta))\n return C, Z\n```\nAbove, we consider a single, scalar metric `M(y, y_hat)`.\nIn practice, we simultaneously consider at least 2 metrics (e.g. 
recall, precision) by having `M(y, y_hat)` return a vector of metrics.\nThis makes the \"delta\" and \"z\" variables be vector sequences instead of scalar sequences.\n\n### Toy example\n\nAs an example, let's consider a basic hierarchy with 5 nodes:\n```\n 0\n / \\\n1 2\n / \\\n 3 4\n```\nwith a uniform prior on the leaves: `I(1) = I(3) = I(4) = -ln(1/3) ≈ 1.10`, `I(2) = -ln(2/3) ≈ 0.41`, `I(0) = -ln(1) = 0`.\n\nNow consider 2 examples (A and B), both with groundtruth `y = 3`, and predictions\n```\nA: p(0) = 1, p(1) = 0.1, p(2) = 0.8, p(3) = 0.6, p(4) = 0.1\nB: p(0) = 1, p(1) = 0.2, p(2) = 0.7, p(3) = 0.2, p(4) = 0.3\n```\nExample A has the correct leaf prediction, example B does not.\nThe Pareto sets for these examples (from `OrderedParetoSet`) are:\n```\nA: y_hat = [0, 2, 3], p(y_hat) = [1, 0.8, 0.6], I(y_hat) = [0, 0.41, 1.10]\nB: y_hat = [0, 2, 4], p(y_hat) = [1, 0.7, 0.3], I(y_hat) = [0, 0.41, 1.10]\n```\nThe `ConstructDatasetCurve` function evaluates the metric for all predictions, computes the metric deltas, merges the lists and takes the partial sum.\nUsing the \"Recall\" metric and `min_tau = 0`, we have:\n```\nA: z = [0, 0.41, 1.10]/1.10 = [0, 0.37, 1], delta = [0.37, 0.63], c = [0.8, 0.6]\nB: z = [0, 0.41, 0.41]/1.10 = [0, 0.37, 0.37], delta = [0.37, 0], c = [0.7, 0.3]\nZ_0 = 0\n// merge c's and deltas; sort descending by c\nC = [0.8, 0.7, 0.6, 0.3]\nDelta = [0.37, 0.37, 0.63, 0]\n// prepend/append initial/final value\nC = [1, 0.8, 0.7, 0.6, 0.3, 0]\nDelta = [0, 0.37, 0.37, 0.63, 0]\nZ = (1/2) * (Z_0 + cumsum(Delta)) = [0, 0.18, 0.37, 0.68, 0.68]\n```\nThis gives piecewise-constant recall: it increases from 0 (for 1 > tau ≥ 0.8) to 0.68 (for 0.6 > tau ≥ 0).",
" Thanks for the responses. Yes, please provide the two algorithmic routines (OrderedParetoSet and ConstructDatasetCurve). I have very low confidence about whether my imagination of these routines is the same as your actual process. If possible, please also provide a toy example (ex: a task with only 3 leaf classes and 10 or fewer data points) to demonstrate the outputs of key steps in the routine. A concrete example will significantly reduce our effort in the evaluation.",
" Thanks for your feedback.\nIt seems that the negative evaluation stems from the fact that we simply did not state the motivation and contributions clearly enough.\nWe believe it is possible to modify the text such that the significance of the work can be recognized.\n\n> \"There is no theoretical motivation or practical motivation for using 'Operating characteristic curve' … why is it better than the other metrics?\"\n\nFor any problem where prediction involves a trade-off, it is vital to evaluate methods at multiple operating points for 2 main reasons:\n1. Fair comparison.\nIf we compare two methods using one operating point each, it might not be an apples-to-apples comparison, and invalid conclusions could be drawn.\nFor example, in Figure 1, if we compared *soft-max-margin* and *flat softmax* using majority inference (τ = 0.5, triangle marker), we would conclude that *soft-max-margin* is more specific (x axis) and *flat softmax* is more correct (y axis).\nThe curves reveal this to be incorrect: *soft-max-margin* is more correct at the same specificity.\n2. Application-specific requirements.\nOperating curves enable practitioners to select methods based on individual requirements, such as minimum correctness rate.\n\nThis is why ROC and precision-recall curves are universal in binary classification and detection, respectively.\nWe will make this more clear in Introduction, paragraph 2.\nReviewer @CWNt said that \"this sounds a must-have for the evaluation\".\n\nWe emphasize that the operating curves in the paper *all use existing metrics*.\nThe key difference is the evaluation protocol, which considers the range of predictions obtained from a threshold-based inference rule, rather than a single prediction from a static inference rule.\n\n> \"what is the takeaway? What loss function should we use when there is a hierarchical classification problem?\"\n\nFrom Fig 1, it is trivial to determine that soft-max-margin and DeepRTC-softmax are the most effective methods, as they dominate the others.\nOur OOD experiment suggests DeepRTC-softmax to be more suitable for practical applications due to its robustness to unseen classes.\n\nHowever, the key takeaway is to always consider the trade-off when evaluating hierarchical classifiers, otherwise the wrong conclusion could be drawn.\n\n> \"It's difficult to find out what the authors actually did. 
… I suggest the authors summarize their contributions at the end of the introduction as a paragraph or bullet points.\"\n\nWe propose to include this list:\n\n* We introduce a novel, efficient technique for evaluating hierarchical classifiers that captures the full trade-off between specificity and correctness using a threshold-based inference rule.\nThis enables fairer comparison and better characterization of different methods.\n* We propose two novel loss functions, soft-max-descendant and soft-max-margin, as well as a simple modification of DeepRTC.\nWhile soft-max-descendant is ineffective, soft-max-margin and (modified) DeepRTC achieve the best results.\n* We conduct an empirical comparison of loss functions and inference rules using the iNat21-mini dataset and its hierarchy of 10,000 species.\nThe naive softmax loss outperforms other approaches, with the exception of DeepRTC and our soft-max-margin loss.\nThe threshold-based inference rule is shown to be highly effective.\n* The robustness of different methods to unseen leaf-node classes is evaluated, showing that top-down conditional approaches can have an advantage in this setting.\n\nWe feel that the contributions are significant and ask that you please re-evaluate the merits of the work.\n\n> Related work\n\nWe can remove citations for specialized architectures L64-67, and add a contextualizing sentence each to Hierarchical metrics and Structured prediction.\n\n> \"The authors spend so much text explaining previous methods and metrics without explaining why they should be there as they do not help in understanding the proposed loss functions and metrics.\"\n\nFor the paper to be self-contained and reproducible, we simply gave the definition of the inference rules and loss functions being compared in the main experiment, as well as the metrics for which we constructed operating curves.\n\n> Why include soft-max-descendant if it performs poorly?\n\nWe retained this negative result to save others the effort, since it seemed like a logical loss to use with this parameterization.\nIt can easily be removed or shifted to an appendix if the reviewers agree.\n\n> Embeddings for millions of classes? General approach?\n\nWith millions of classes, it's common to query an efficient index of class embeddings rather than exhaustively compare an example to every class with a dense layer (our \"general approach\").\nThis is unclear and we will re-phrase.\n\n> Logical modes of failure? Excessive trust?\n\nWe mean: Automation bias is more likely if a system makes less flagrant errors, and hierarchical classifiers make less flagrant errors because they predict superclasses when uncertain.",
" Thanks for your review.\nWe're glad that you appreciate the importance of evaluation using multiple operating points.\n\n> I am slightly surprised that … \"almost no prior work has compared methods at multiple operating points\" … If the above claim is valid, I believe this work provides a valuable evaluation of the problem and is a significant contribution. … I am not an expert on this topic and will cross-check with other reviewers for the claim\n\nWe were similarly surprised.\nIt seems that none of the reviewers have contested this claim.\n\n> To strengthen the significance … explain why evaluation at multiple operating points is not popular or feasible without the method in section 5\n\nWithout the method in Section 5, one must repeat the inference procedure (eq. 4) and evaluate the metrics for every threshold.\nIf we let Y denote the number of nodes in the hierarchy, then this takes at least O(Y) time.\nDoing this for a resolution of T different thresholds takes O(T Y) time.\nHowever, the Pareto set, which gives the predictions for *all* thresholds, can be identified in O(Y log Y) time.\nOur algorithm generates the curves with perfect (continuous) resolution and is faster by a factor of T / log Y.\nThe naive algorithm takes significantly longer to generate even moderate-resolution plots (e.g. 20 thresholds).\n\n> Does Eq-17 have any limitations? For example, how was its accuracy in the most popular classification setting that only predicts leaf classes?\n\nFor iNat21-mini, soft-max-margin (eq. 17) is slightly better than flat softmax classification for the classical task.\nThis can be read from Figure 1 (right): the operating curves include leaf inference as a special case (τ = 0, circle marker), and the \"Correct\" and \"Exact\" metrics reduce to classical accuracy when the predicted and ground-truth classes are leaf nodes.\n\nEq. 17 does have the limitation that it is necessary to choose the desired margin, and a poor choice may lead to worse results.\nFor example, in Figure 4 (Appendix A), it can be seen that setting the scale too large (α = 20) results in diminished accuracy.\n\n> elaborate more on why Eq-17 gives a result better than flat in Figure 1\n\nEq. 17 is more effective because, unlike the flat softmax loss, it demands greater separation (in terms of logits) of the true label from the incorrect classes than from the other correct classes.\nThis will be added to the text.\n\n> use an algorithmic block to summarize the process of section 5\n\nWe agree that this would improve the clarity of Section 5.\nWe have written algorithms for two subroutines, `OrderedParetoSet` and `ConstructDatasetCurve`, which will be added to the supplementary material.\nThese can be provided during the discussion if desired.\nSee also the response to reviewer @53HE concerning this section.",
" > Strengths: A certain novelty [in] loss functions and section 5, if correct\n\nThanks for considering the manuscript in detail and recognizing these aspects.\nWe believe the paper, especially Section 5, can be significantly improved given the feedback.\n\n> Are the p(y) for all methods probabilities? … That can be put in a table.\n\nNot all parameterizations give valid probabilities on the hierarchy: some satisfy eq. 1, some only guarantee that each child is less confident than its parent, and some do not even provide this.\nA table is a great idea.\nWe can either move the equations of Section 4.2 into this table to make space, or add it as an appendix.\n\n> If [the p(y) are probabilities], what is the set over which they sum up?\n\nIndeed, the probabilities of all classes will not sum to 1 since some classes are supersets of others (think: event space not sample space).\nWe instead say that a distribution is valid if p(y) ≥ 0 for all y, p(root) = 1 and p satisfies eq. 1.\n\nIn the case where eq. 1 holds with *equality*, the likelihoods of the leaf nodes sum to 1.\nHowever, we deliberately permit superclasses to be \"larger\" than the union of their children, as this may help generalize to unseen classes.\nStill, if p is valid, the \"exclusive\" probabilities (i.e. of a node and _not_ its children), \\\\(\\\\tilde{p}(u)=p(u)-\\\\sum\\_{v\\\\in \\\\mathcal{C}(u)}p(v)\\\\), will sum to 1 over all nodes.\n\n> How does computing the curves in Section 5 work?\n\nEach aspect is addressed below.\n\n> How to sort with two criteria?\n\nThe key here is that we find an ordering of _just_ the Pareto set (not the set of all nodes), and the Pareto set (for 2D vectors) can always be ordered such that one criterion is increasing and the other is decreasing.\n(Visually, imagine walking along the Pareto front: https://upload.wikimedia.org/wikipedia/commons/b/b7/Front_pareto.svg)\nThis is not self-evident and we will make it explicit.\n\nAnother way to understand this is: the inference rule (eq. 4) will only predict a node with *lower confidence* if it contains *greater information*.\n\nIn the example provided, if \\\\(p(\\\\hat{y}\\_k)\\\\ge p(\\\\hat{y}\\_{k+1})\\\\) and \\\\(I(\\\\hat{y}\\_k)\\\\ge I(\\\\hat{y}\\_{k+1})\\\\), then \\\\(\\\\hat{y}\\_{k+1}\\\\) would not be in the Pareto set from eq. 18, as it is dominated by \\\\(\\\\hat{y}\\_k\\\\).\n\n> What is varied in line 233 with z\\_k and the τ when considering the z\\_k? There is an unclear y in the definition of z\\_k which might be related to the τ.\n\nAt this point, we are considering a single example (x, y), so y is the GT label and is not related to τ.\nThe sequence z\\_k represents the value of the metric \\\\(M(y,\\\\hat{y})\\\\) for the sequence of predictions \\\\(\\hat{y}\\_k\\\\), hence the dependence on the GT label y.\n\nAs τ is varied (decreased), the predicted label will progress through the sequence \\\\(\\hat{y}\\_k\\\\), with the predictions becoming less confident and more informative.\n\nMaybe it's more clear if we define the prediction as a piecewise-constant function \\\\(\\\\hat{y}(τ)=\\\\hat{y}\\_{κ(τ)}\\\\) where \\\\(κ(τ)\\\\) finds the index k such that \\\\(c\\_k>τ\\\\ge c\\_{k+1}\\\\)?\nThen \\\\(M(y,\\\\hat{y}(τ)) = M(y,\\\\hat{y}\\_{κ(τ)})=z\\_{κ(τ)}\\\\).\n\n> How to merge the sequence of pairs δ into Δ? 
Why does it work?\n\nThe lists of (confidence, delta\\_metric) pairs for every example are merged to be ordered (descending) by confidence.\n\nThe algorithm works by a simple manipulation of the summations over (i) examples and (ii) deltas.\nWe propose to show this by adding an equation: (here, i is the index over examples)\n$$\\\\textstyle\\\\sum_i M(y^i,\\\\hat{y}^i(τ))=\\\\sum_i (z^i\\_0+\\\\sum\\_{k:c^i\\_k>τ}δ^i\\_k)=\\\\sum\\_i z^i\\_0+\\\\sum\\_{(i,k):c^i\\_k>τ}δ^i\\_k=Z\\_0+\\\\sum\\_{j:C\\_{j}>τ}\\\\Delta\\_j$$\n\nThus the metric for all τ can be obtained using a partial sum over \\\\(\\\\{δ^{i}\\_{k}\\\\}_{i, k}\\\\) sorted by \\\\(c^i\\_k\\\\).\n\n> Section 5 is hard to understand [and] reads as if one has to trust that it works.\n\nBesides the above changes, we will add algorithms to the appendix as suggested by reviewer @CWNt.\nThe code release will include unit tests for these subroutines.\n\n> What is the meaning of eq(17) given the specific cost function choice C? It seems to give a higher weight to nodes y' such that the prediction y is on the path from y'?\n\nSorry, there was an error here: it should be \\\\(C(y,y')=1-\\\\text{Correct}(y,y')\\\\).\nIt's not exactly a weighting; rather, the meaning of the loss is to demand greater separation (in logit values) of the true label from the incorrect classes than from the other correct classes.\nThis will be added to the text.\n\n> What is the meaning of eq(7) given the specific cost function choice C?\n\nSorry, there was an error here: it should be \\\\(C(y,y')=-\\\\text{Correct}(y,y')I(y')\\\\).\nThis seeks to predict the label y' that maximizes the expected correct information.\n\n> changes made to deepRTC need a discussion\n\nDeepRTC-softmax first obtains leaf likelihoods and adds these up (like flat softmax) to ensure a valid distribution.\nThis will be more clear using the table of methods.",
" This paper presented a new approach for evaluating hierarchical classifiers for predicting non-leaf nodes at multiple operating points. With the new evaluation metric, the paper shows that the widely used softmax classifier performs reasonably well in many cases, although a structured classifier might have an advantage in classifying unseen classes. The paper also proposes two new loss functions. One of them outperforms the softmax classifier in all examinations. The experiments are on ImangeNet and iNat classification tasks. The paper is well written and easy to follow. A large amount of literature is discussed and adequately summarized. The comparison between the paper's and previous methods is succinct but informative.\n\nRegarding significance, comparing hierarchical classifiers at multiple operating points sounds a must-have for the evaluation. Therefore I am slightly surprised that the paper mentioned that \"almost no prior work has compared methods at multiple operating points\" (in lines 5 and 6). If the above claim is valid, I believe this work provides a valuable evaluation of the problem and is a significant contribution. I like to admit that I am not an expert on this topic and will cross-check with other reviewers for the claim. To strengthen the significance, the author may explain why evaluation at multiple operating points is not popular or feasible without the method in section 5.\n\nTechnically, I do not find obvious mistakes, but the clarity of the key methods may have room to improve. For example, (1) elaborate more on why Eq-17 gives a result better than flat in Figure 1; (2) use an algorithmic block to summarize the process of section 5.\n\nRegarding the experiments, the paper uses large-scale datasets, has proper baselines and prior methods, and examines interesting aspects such as flat classifiers at different levels and unseen classes. The results provide conclusive insights and are easy to interpret. Does Eq-17 have any limitations? For example, how was its accuracy in the most popular classification setting that only predicts leaf classes? Lines 340-344 have the discussion and make sense to me.",
" The authors consider the tasks of hierarchical classification where predictions can be done at different levels of granularity, which can be seen in datasets like ImageNet, where a super class can be spiders and its fine-grained classes are the different types of spiders.\n\nThe authors describe and compare different existing loss functions on a new metric class based on \"Operating characteristic curve\" on the iNat21-Mini dataset. They also propose two loss functions: soft-max-descendant that scores internal nodes in the classification hierarchy, and soft-max-margin which is a soft version of the structured hinge loss.\n\nDiscussions have been made about how different loss functions behave based on the structure of the problem such as out-of-distribution and in-distribution. \nStrengths:\n\n- Hierarchical classification is very interesting to the research community.\n\n\nWeaknesses:\n\n- There is no theoretical motivation or practical motivation for using \"Operating characteristic curve\" as a metric for Hierarchical classification, why is it better than the other metrics in terms of reliability, correctness, etc.? For example, [26] proposed an improved version of the mistake-severity metric that \"could give a wrong impression of improvement while the model might just be making additional mistakes to fool the metric\" which is a good motivation for their work. That type of motivation is missing in this work.\n\n- The document is poorly written as it is difficult to find out what the authors actually did. The authors do not clearly present their contributions. I suggest the authors summarize their contributions at the end of the introduction as a paragraph or bullet points.\n\n- The related work is poorly written, there is a large list of existing methods described but the authors do not show how relevant they are to the proposed metric/loss functions. It is important that by the end of related work paragraphs to describe what the paper proposes in relation to existing methods.\n\n- The authors spend so much text explaining previous methods and metrics without explaining why they should be there as they do not help in understanding the proposed loss functions and metrics. A simple citation and few sentence explanations of these methods would be sufficient. This amount of detail just adds confusion about finding out what the authors are actually trying to propose.\n\n- The proposed soft-max-descendant performs poorly in so many of the settings so it is not clear what the motivation is for including it in this work. Can the authors explain the main contributions and motivation of this work? It is not clear why having \"Operating characteristic curve\" is important and why having the two loss functions \"soft-max-descendant\" and \"soft-max-margin\" are important as their performance is poor in many settings.\n\nA lot of loss functions have been compared against a new metric, what is the takeaway? What loss function should we use when there is a hierarchical classification problem, and why is the proposed metric a more reliable approach than others? The authors state that \"One limitation of our general approach is that it may not easily scale to millions of classes, where embeddings can be more suitable.\" It is not clear why that is the case, why embeddings are more suitable, and what \"general approach\" really means. Please clarify. \n\nThe societal impact is poorly written, can you clarify what this quote means? 
\"While hierarchical classifiers may mitigate some of the risks of misclassification due to their more logical modes of failure, this may enable more widespread use of automatic classification or lead human users to place excessive trust in the system.\" It is not clear to me what \"logical modes of failure\" mean, and I don't see why humans would place excessive trust in the system.",
" Topic: the paper reviews the performance of hierarchical classification for deep learning classifiers.\n\nThey propose as scoring for internal nodes to sum up the softmax components of the leaves.\nThey propose for the above scoring two loss functions. \nThey propose in section 5 a way to compute curves over varying thresholds \\tau for node probabilities p(y), where the graphs show correct versus exact and precision vs recall, with the note that these measures are defined in the sense of hierarchical measures and not as conventional precision vs recall for flat classifiers.\n\nThey evaluate these curves for a number of methods.\nThey run an experiment with simulated unseen classes which are part of the hierarchy.\n\nThe topic is relevant.\nFor a better score, section 5 needs to be verifiable and better understandable.\n \nStrengths:\nA certain novelty, with combinations of loss functions and section 5, if correct.\nIt is an overview evaluation for a number of methods.\nThe topic is interesting and relevant.\n\nWeaknesses:\n\nIt has a not so great readability.\nSome parts appear to be too brief, and it appears as if it would profit from an extended journal version.\n\n-the meaning of the choice of C(y,\\hat{y}) in line 181 is a bit unclear. One seems to sum I(y) values from ancestor nodes starting at the prediction \\hat{y}.\n-eq (7) needs a discussion\n-the changes made to deepRTC need a discussion.\n-eq (17) needs a discussion with respect to the concrete C term used.\n\n-Section 5 is hard to understand. \n\nThe ordering is a double ordering, with two criteria. How to sort with two criteria?\n\np(\\hat{y}_{k}) \\ge p(\\hat{y}_{k+1}) and I(\\hat{y}_{k}) \\le I(\\hat{y}_{k+1}). What to do if p(\\hat{y}_{k}) \\ge p(\\hat{y}_{k+1}) and I(\\hat{y}_{k}) \\ge I(\\hat{y}_{k+1}) for example ?\n\nThere is a \\Delta seemingly out of nowhere. How to merge the sequence of pairs ? Why does it work? \nThere is an unclear y in the definition of z_k which might be related to the \\tau.\n\nSection 5 reads as if one has to trust that it works as described. \n\n-a smaller issue: it could be worth to cite some pre-deep learning hierarchical papers (but that causes no reduction in scoring) like:\n\nBlaschko, Zaremba, Gretton Taxonomic Prediction with Tree-Structured Covariances\nZweig, Weinshall, Exploiting Object Hierarchy: Combining Models from Different Category Levels\nMarszalek, Schmid, Semantic Hierarchies for Visual Object Recognition\nand the like Questions: \n\nAre the p(y) for all methods probabilities? if yes, what is the set over which they sum up? That can be put in a table.\n\n\nHow does computing the curves in Section 5 work?\nThe ordering is a double ordering, with two criteria. How to sort with two criteria?\nWhat is varied in line 233 with z_k and the \\tau when considering the z_k? There is an unclear y in the definition of z_k which might be related to the \\tau.\nHow to merge the sequence of pairs \\delta into \\Delta?\n\nWhat is the meaning of eq(7) given the specific cost function choice C?\nWhat is the meaning of eq(17) given the specific cost function choice C? It seems to give a higher weight to nodes y' such that the prediction y is on the path from y' ? \n\n\n Some discussion has been made. No issue regarding potential negative societal impact."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"DhV-tDMmUT6",
"nips_2022_mNtFhoNRr4i",
"DxAvIICF8kG",
"xSWlXLgU0D8",
"P3H-oJDlYsr",
"-jBTV0Z59ma",
"__g2hw7B1YG",
"lUOunTrhFiE",
"nips_2022_mNtFhoNRr4i",
"nips_2022_mNtFhoNRr4i",
"nips_2022_mNtFhoNRr4i"
] |
nips_2022_YR-s5leIvh | CLEAR: Generative Counterfactual Explanations on Graphs | Counterfactual explanations promote explainability in machine learning models by answering the question “how should the input instance be altered to obtain a desired predicted label?". The comparison of this instance before and after perturbation can enhance human interpretation. Most existing studies on counterfactual explanations are limited in tabular data or image data. In this paper, we study the problem of counterfactual explanation generation on graphs. A few studies have explored to generate counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed: 1) optimizing in the discrete and disorganized space of graphs; 2) generalizing on unseen graphs; 3) maintaining the causality in the generated counterfactuals without prior knowledge of the causal model. To tackle these challenges, we propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models. Specifically, CLEAR leverages a graph variational autoencoder based mechanism to facilitate its optimization and generalization, and promotes causality by leveraging an auxiliary variable to better identify the causal model. Extensive experiments on both synthetic and real-world graphs validate the superiority of CLEAR over state-of-the-art counterfactual explanation methods on graphs in different aspects.
| Accept | This paper proposes a new method for producing counterfactuals on graphs. This is performed using a VAE on graphs with auxiliary variables to identify independent components and promote causality. While this work is mainly a combination of existing ideas, the resulting method is not trivial.
The engaged discussion clarified most of the concerns except a remaining concern around the diversity of the explanations. The reviewer encouraged the authors to measure (or optimize for) the diversity of explanations, that is, explanations that are significantly different (e.g. orthogonal to each other) in latent space. This is not grounds for rejection, but it could improve this work, and we encourage the authors to add this feature.
I recommend acceptance of this paper.
| train | [
"lS9S5Z5czXZ",
"pi9Eqq0mu_P",
"yYgzkNeMVI",
"yYxCvdKA818",
"7FaANxkZaE",
"sgE_KaI38py",
"X5hmKSqXymj",
"bgIHgLnEop",
"IxYeXt1WUS",
"iu-mTMB-3le",
"7O4K674rpVp",
"m5TJkBfDPu",
"3KmyohAtGW",
"arsTcYUymKh",
"ez_cHc3Cu_K",
"qBdQWczCBfl",
"qe2t1c6rW1",
"p3baE5u-zGw",
"LTOqVZE7qJD"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Yes. The graph structure of the counterfactuals is different. Thank you for this suggestion, we will add this clarification in the paper.",
" When you say \"not the same\" you mean that the discrete graph structure is different? If so, that is encouraging, you might want to clarify this in the paper.",
" Thank you for your additional feedback! Although encouraging diversity is not our main focus in this paper, the probability of generating the same counterfactual from different times of sampling is very low due to the complexity of graph structure. In our current experiments, over 99% counterfactuals for the same graph are not the same. Besides, it is very easy to explicitly involve further constraints such as diversity in our framework, depending on the application scenario.\n\nPlease let us know if you have further comments or concerns, thanks!",
" Your responses clarify most of my doubts. The only part I am still concerned about is that sampling does not guarantee that your model does produce multiple counterfactuals that are different from each other and thus, it is not clear if the evaluation can be trusted.",
" Thank you for your recognition! If you would like to increase the score, we appreciate it, and this is just a gentle reminder that the score seems unchanged yet.",
" Thanks for your additional feedback. For this concern, we clarify that: 1) counterfactual explanations promote model interpretation mainly through the comparison between the original graph and its counterfactuals, i.e., the perturbations which need to be made to achieve the desired prediction. 2) Proximity measures the similarity between the original graph and its counterfactuals. Intuitively, higher proximity indicates smaller perturbation, which leads the interpretation to be more compact.\n\nPlease let us know if you have further questions or concerns, thanks!",
" Thanks for the response. However, one of my concerns is not addressed. Why the similarity metric, i.e., proximity can measure compactness for interpretability? From equation (7), I do not think proximity can measure interpretability.",
" Thank you for the detailed response. I am happy to increase my score.",
" Dear Reviewer SccR,\n\nThank you again for your valued comments! We have responded to your initial questions, and we are looking forward to your further feedback. We will be happy to answer any further questions you may have.\n\nThank you.\n\nAuthors",
" Dear Reviewer Htsg,\n\nThank you again for your valued comments! We have responded to your initial questions, and we are looking forward to your further feedback. We will be happy to answer any further questions you may have.\n\nThank you.\n\nAuthors",
" Dear Reviewer xuuN,\n\nThank you again for your valued comments! We have responded to your initial questions, and we are looking forward to your further feedback. We will be happy to answer any further questions you may have.\n\nThank you.\n\nAuthors",
" ### Q5: The use of GNNExplainer as a baseline is unclear. Are they assigning the same label to all nodes and then generating explanations for a given node? Or are they aggregating all node explanations to generate a graph-level explanation?\n\nWe suppose the reviewer refers to CF-GNNExplainer. The original CF-GNNExplainer focuses on node classification, the input is a specific node’s surrounding subgraph, and the output is a perturbed subgraph to change the prediction of this node. We adapt CF-GNNExplainer for graph classification by taking the whole graph (instead of the subgraph of any specific node) as input, and optimizing the model until the graph classification (instead of node classification) label has been changed to the desired one. In this way, we are not optimizing for a specific node, and we do not need aggregation over all node explanations. \nWe’ve revised the baseline description for better clarification in the new version (highlighted in blue).\n\n### Q6: It would be great if the authors can motivate using causality metric for comparison. It feels that the metric is biased towards the proposed framework as the auxiliary variable can provide additional information to identify the exogenous variables in the structural causal model, which is captured by the CLEAR-VAE training process.\n\nAs far as we know, this is the first work which incorporates causality into counterfactual generation on graphs. We use the similar causality metrics in previous work of counterfactual explanation on i.i.d. data [3].\n\n[1] Numeroso, Danilo, and Davide Bacciu. \"Meg: Generating molecular counterfactual explanations for deep graph networks.\" 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021.\n\n[2] Bajaj, Mohit, et al. \"Robust counterfactual explanations on graph neural networks.\" Advances in Neural Information Processing Systems 34 (2021): 5644-5655.\n\n[3] Mahajan, Divyat, Chenhao Tan, and Amit Sharma. \"Preserving causal constraints in counterfactual explanations for machine learning classifiers.\" arXiv preprint arXiv:1912.03277 (2019).\n",
" Thank you for the comments. We offer the clarifications below to solve your concerns.\n\n### Q1: The perturbation process for generating similar graphs using very small perturbation is unclear, i.e., how to generate perturbed instances that are not out-of-distribution samples.\n\nThe perturbation mechanism in our method is inferred by the proposed graph autoencoder in CLEAR, which is trained to optimize the objective (Eq. (2)). The loss function in Eq. (2) enforces CLEAR to learn a way to perturb the input graph and generate counterfactuals which 1) are similar to the input graph (enforced by the term $d(G, G^{CF})$ in Eq. (2), and 2) achieve the desired prediction (enforced by the term $l(f(G^{CF}), Y^∗)$). In our framework, the VAE backbone is optimized based on the evidence lower bound (ELBO), which is equivalent to maximizing a lower bound of the log likelihood of training data. This helps prevent generating out-of-distribution graphs.\n\n### Q2: It is unclear from Section 4.7 and Appendix C.2 how the generated counterfactuals using CLEAR promote model explainability. \n\nVisualizing the perturbations is a commonly-used way for showing the ability of promoting model explanability for counterfactual explanations [1], so we also follow this way in Section 4.7 and Appendix C.2. The results show that CLEAR can promote model explainability in two aspects: 1) CLEAR does find the decision boundary of the prediction model, and thus can explain how we can make perturbations to achieve a desired prediction; 2) the generated counterfactuals are consistent with the underlying causal model, therefore, CLEAR can provide more realistic perturbations for human interpretation.\n\nBefore these results, the superiority of our method over existing methods in graph counterfactual explanation has already been validated in Table 1 by comparison in different metrics (e.g., validity, similarity, causality, etc.), so the good performance of CLEAR in these metrics has shown its advantage in promoting model explainability. \n\n### Q3: The proposed CLEAR framework uses a generative backbone CLEAR-VAE and claims that it can generate counterfactual explanations for unseen graphs. However, it should still require graphs to belong to the same distribution as the training data.\n\nWe should clarify that the word “generalization” in this paper is different from “domain generalization” for graphs from different distributions. In this paper, the point is that our method can be directly applied for unseen graphs in an inductive way without retraining the model, while most existing graph CFE methods either are based on enumeration, or need to be trained separately for each input graph.\n\n### Q4: The paper details the problem of optimizing counterfactual explanation on graphs due to its discrete nature but then follows the optimization trick used by previous works for generating a counterfactual adjacency matrix.\n\nThe discrete nature of graph structure brings difficulty for optimization, and thus many existing methods of counterfactual explanation on graphs are designed in an enumeration way. To circumvent this challenge, we allow gradient-based counterfactual explanation on graphs by 1) using a graph neural network (GNN) to handle the input graph, and 2) generating a continuous adjacency matrix and then mapping it to discrete values. 
\n\nAs far as we know, our work is the first gradient-based optimization method for counterfactual explanation on graphs without additional assumption of the prediction model and application domain.\nAlthough gradient-based optimization has been employed in previous work of graph generation, in the area of counterfactual explanation on graphs, most existing methods are still in an enumeration way. \n\nFor those few graph CFE methods which enable gradient-based optimization, most of them rely on specific domain knowledge [1] (e.g., chemical rules) or assumptions [2] about the prediction model (e.g., the prediction model is a graph neural network (GNN) model and its gradients or representations are accessible) to prune the search space or facilitate the optimization. However, these domain knowledge and assumptions limit their application in different scenarios, while our proposed method can optimize without these knowledge or assumptions.\n",
" ### Q5: Originality of optimization: Many works have employed various models to generate graphs by gradient-based optimization instead of enumeration.\n\nAs far as we know, our work is the first gradient-based optimization method for counterfactual explanation on graphs without additional assumption of the prediction model and application domain.\nAlthough gradient-based optimization has been employed in previous work of graph generation, in the area of counterfactual explanation on graphs, most existing methods are still in an enumeration way. GNNExplainer and the other two papers the reviewer mentioned are not counterfactual explanation methods (although some of them might be adapted for counterfactual explanation). \n\nFor those few graph CFE methods which enable gradient-based optimization, most of them rely on specific domain knowledge [3] (e.g., chemical rules) or assumptions [4] about the prediction model (e.g., the prediction model is a graph neural network (GNN) model and its gradients or representations are accessible) to prune the search space or facilitate the optimization. However, these domain knowledge and assumptions limit their application in different scenarios, while our proposed method can optimize without these knowledge or assumptions.\n\n\n### Q6: The grant application example for describing causality in counterfactual explanations is confusing.\n\nIn the real world, the causal model of grant application may be more complicated than the example we show. But the grant application process is just a hypothetical example to explain our motivation, so we choose a simple causal model for easier understanding. \n\nIn this example, we do not assume the form of structural equations and data distributions in the causal model, so we did not fix it in mathematical form. In the example, the main point we try to convey is that, in order to achieve a desired prediction (e.g., grant approval), the counterfactual generator needs to perturb the original graph (e.g., increase the number of collaborations), but the perturbation needs to be consistent with the underlying causal relations between variables (e.g., more collaborations bring better team culture), otherwise, the counterfactual explanations might be unrealistic or even meaningless for human interpretation [5].\n\n[1] Khemakhem, Ilyes, et al. \"Variational autoencoders and nonlinear ica: A unifying framework.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\n\n[2] Liu, Ninghao, et al. \"Adversarial attacks and defenses: An interpretation perspective.\" ACM SIGKDD Explorations Newsletter 23.1 (2021): 86-99.\n\n[3] Numeroso, Danilo, and Davide Bacciu. \"Meg: Generating molecular counterfactual explanations for deep graph networks.\" 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021.\n\n[4] Bajaj, Mohit, et al. \"Robust counterfactual explanations on graph neural networks.\" Advances in Neural Information Processing Systems 34 (2021): 5644-5655.\n\n[5] Mahajan, Divyat, Chenhao Tan, and Amit Sharma. \"Preserving causal constraints in counterfactual explanations for machine learning classifiers.\" arXiv preprint arXiv:1912.03277 (2019).\n",
" \nThank you for carefully reviewing our paper. We offer the following clarifications for your concerns:\n\n### Q1: The merits of ICA over the existing methods in identifying causality should be highlighted.\n\nAs far as we know, this is the first work which incorporates causality into counterfactual explanation on graphs. And the reason we take advantage of nonlinear ICA for better identifying causality is the natural connection between them [1]. The setting of nonlinear ICA and causality learning is essentially quite similar. Nonlinear ICA assumes that the observed data $X$ is generated by certain transformations on latent variables $Z$, and aims to identify $Z$ and the transformations. Similarly, in a causal model, the observed data is also generated based on a set of exogenous variables and structural equations. Considering this, we utilize the ICA techniques to promote causality in this paper. \n\nWe’ve also carefully read the papers the reviewer mentioned (references [1,2] in the reviewer’s comments). Although these works also consider identifying causality, they are not targeting counterfactual explanation problems. These works focus more on identifying causal representations for model-based interpretation or domain adaptation, while in our problem we aim to generate counterfactual explanations which are compatible with the original structural causal model (SCM). The goals and scenarios of these problems are different. \n\n### Q2: One important metric for explanations is human interpretability requiring the generated explanations should be compact.\n\nThanks for this suggestion. Even though we did not explicitly evaluate the compactness of the explanation, in counterfactual explanation, we can show whether the explanations are compact through the proximity metric (i.e., the similarity between each original graph and its counterfactuals). Essentially, counterfactual explanation requires the counterfactuals to achieve a desired label with smallest perturbation on the original graph. In general, the higher the metric proximity is, the more compact, or sparse the perturbations should be.\n\n### Q3: Counterfactual explanations can be used to defend against attackers. The authors are encouraged to conduct experiments on graphs generated by the adversarial attacks to verify the effectiveness of counterfactual explanations.\n\nYes, counterfactual explanations can be used to defend against attackers [2]. Although it is not the main focus of this paper, It would be an interesting future work to further investigate their connection. \n\n### Q4: The experiments can not well support the claimed contribution in generalization. \n\nWe should clarify that the word “generalization” in this paper is different from “domain generalization” for graphs from different distributions. In this paper, the point is that our method can be directly applied for unseen graphs in an inductive way without retraining the model, while most existing graph CFE methods either are based on enumeration, or need to be trained separately for each input graph.\n\nIn our experiments, we should clarify that the generalization ability has been implicitly validated. In Table 1, we compare the time cost of our method and baselines, as our method can be directly used for unseen graphs without retraining. Our method outperforms those baselines which need to be separately trained for unseen graphs (please refer to Line 297-302).\n",
" Thank you for carefully reviewing our paper and these comments. We offer the clarifications below to solve your concerns.\n\n### Q1: Motivating the problem in general rather than giving more examples in abstract and introduction . \n\nThank you for this suggestion. We’ve added more general motivation in abstract and introduction in the new version (highlighted in blue). \n\n“Generally, CFE promotes human interpretation through the comparison between the explainee instance $X$ with predicted label $Y$ and its counterfactual $X^′$ with predicted label $Y' \\neq Y$. With its intuitive nature, CFEs can be deployed in various scenarios such as loan application and legal framework [1]. Different from traditional CFE studies on tabular or image data, recently, CFE on graphs is also an emerging field with applications in many graph structure related domains such as molecular analysis and career networking.”\n\n### Q2: In the ablation study there is no clear difference between VAE and CLEAR. \n\nCompared with CLEAR-VAE, CLEAR improves the causality score. For other metrics, they are not expected to have a clear difference, because CLEAR differs from CLEAR-VAE in its ability of considering causality.\n\n### Q3: Could you verify whether your model is or not producing the same counterfactual 3 times? \n\nThanks for this helpful comment. Although we did not directly encourage diversity in this work, the sampling process in our VAE-based counterfactual explanation generator decreases the probability of generating counterfactuals which are exactly the same. Actually, in our experiments, we’ve never observed counterfactuals which are exactly the same when $N^{CF}=3$. But it would also be an interesting future work to explicitly consider diversity into counterfactual explanations on graphs.\n\n### Q4: Could you clarify what causality you want to preserve? \n\nWe clarify that the causality in this paper means that we aim to generate counterfactual graphs which are compatible with the original structural causal model (SCM).\n\n### Q5: Why did you skip 0.1 in the hyperparameter sweep?\n\nWe have updated the results of 0.1 in Fig. 6 of the new version. The main observation of hyperparameter study is that the performance of validity and proximity of CLEAR is generally good unless $\\alpha$ and $\\beta$ are too unbalanced. This observation also holds when we set $\\alpha$ or $\\beta$ as 0.1. \n\n### Q6: Typo in references to Figure 8 \n\nThank you for carefully reading our manuscript, we have revised it in the new version.\n\n[1] Verma, Sahil, John Dickerson, and Keegan Hines. \"Counterfactual explanations for machine learning: A review.\" arXiv preprint arXiv:2010.10596 (2020).\n",
" The paper presents a framework CLEAR to generate counterfactual explanations on graphs. This framework can be helpful for promoting explainability in graph-based prediction models. Specifically, the authors use a graph variational autoencoder to generate the counterfactual graph and employ independent component analysis (ICA) to find causal relations. The generative counterfactual graphs have generalization ability and causality. Experiments show that CLEAR achieves promising performance in various evaluation metrics. Originality: The paper employs two existing technologies, i.e., graph variational auto-encoder and ICA to generate the counterfactual graph. Although the two components are not new, the topic is novel. One of the contributions, optimization, is not original. Many works have employed various models to generate graphs by gradient-based optimization instead of enumeration, such as GNNExplainer, [1], and [2]. Existing works have proposed many methods, such as information flow [1] and Markov blanket [2], to identify causality among latent variables. What’s the merit of ICA over these existing methods? \n\nQuality: Technically sound with well-supported claims. However, the following issues are suggested to be further considered: 1. The evaluation metrics are not sufficient. One important metric for explanations is human interpretability requiring the generated explanations should be compact. The authors should investigate human interpretability. 2. In addition, counterfactual explanations may be used to defend against attackers. The authors are encouraged to conduct experiments on graphs generated by the adversarial attacks to verify the effectiveness of generated counterfactual explanations. 3. The work lacks the evaluation of generalization on the unseen graph.\n\nClarity: Overall, the paper is well written and easy to understand. However, the following issues need to be clarified. 1. The motivation behind employing ICA to find causality is not clear. 2. Why the team culture is not the cause of grant application in the example of the Preliminaries Section? 3. The grant application example for describing causality in counterfactual explanations is confusing. It is advisable to describe it in mathematical form.\n\nSignificance: The method proposed in this paper would be helpful to give counterfactual explanations and promote explainability in graph-based data.\n\n[1] Lin, Wanyu, et al. \"Orphicx: A causality-inspired latent variable model for interpreting graph neural networks.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[2] Yang, Shuai, et al. \"Learning causal representations for robust domain adaptation.\" IEEE Transactions on Knowledge and Data Engineering, 2021.\n\n 1. The merits of ICA over the existing methods in identifying causality should be highlighted.\n2. One important metric for explanations is human interpretability requiring the generated explanations should be compact. The authors should investigate human interpretability.\n3. Counterfactual explanations can be used to defend against attackers. The authors are encouraged to conduct experiments on graphs generated by the adversarial attacks to verify the effectiveness of generated counterfactual explanations.\n4. The experiments can not well support the claimed contribution in generalization. How to evaluate the generalization ability on unseen graphs? 
The work lacks the evaluation of generalization on unseen graphs.\n There are no potential negative societal impacts.",
" In this work, the authors study the problem of generating counterfactual explanations for graphs using Graph Neural Networks (GNN). In contrast to some existing studies, the work lists three unaddressed counterfactual properties, i.e., i) discrete optimization of graphs, ii) generalization of counterfactuals on unseen graphs, and iii) ensuring causality in the generated counterfactuals without prior knowledge of the causal model. The work leverages a graph variational autoencoder-based framework to propose CLEAR (generative CounterfactuaL ExplAnation geneRator for graphs) that ensures the optimization and generalization of discrete counterfactual explanations for graphs and enforces causality by using an auxiliary variable for estimating the underlying causal model better. Strengths:\n1. The paper nicely enumerates three important desiderata for counterfactual explanation generators for graphs and describes their utilities with respect to counterfactual explanations.\n2. The formulation of the objective for the CLEAR framework is clearly explained with proper derivations and descriptions of the individual components.\n3. Extensive experiments with both synthetic and real-world graph datasets highlight the effectiveness of the CLEAR framework as compared to the baselines and show the utility of its components.\n\nWeaknesses and Questions:\n1. One of the major drawbacks of the work is that they don't detail the perturbation mechanism used for generating counterfactuals. They mention that: \"CLEAR aims to generate counterfactuals with slight perturbations on the explainee graph to elicit a desired predicted label\" but does not describe the perturbation process. This is crucial for understanding the framework as one of the main challenges in evaluating the reliability of generated counterfactual explanations is efficiently perturbing the input data. The perturbation process for generating similar graphs using very small perturbation is unclear, i.e., how to generate perturbed instances that are not out-of-distribution samples. \n\n2. It is unclear from Section 4.7 and Appendix C.2 how the generated counterfactuals using CLEAR promote model explainability. The observation that the framework makes correct perturbation to achieve the target label by visualizing the degrees across the decision boundary is intuitive and shown in multiple previous works.\n\n3. The proposed CLEAR framework uses a generative backbone CLEAR-VAE and claims that it can generate counterfactual explanations for unseen graphs. However, it should still require graphs to belong to the same distribution as the training data.\n\n4. The paper details the problem of optimizing counterfactual explanation on graphs due to its discrete nature but then follows the optimization trick used by previous works for generating a counterfactual adjacency matrix.\n\n5. The use of GNNExplainer as a baseline is unclear. The author assigns the given graph label to all the nodes inside the graph. It would be great if the authors explain this a bit more. Are they assigning the same label to all nodes and then generating explanations for a given node? Or are they aggregating all node explanations to generate a graph-level explanation?\n\n6. It would be great if the authors can motivate using causality metric for comparison. 
It feels that the metric is biased towards the proposed framework as the auxiliary variable can provide additional information to identify the exogenous variables in the structural causal model, which is captured by the CLEAR-VAE training process. Please refer to the Strengths and Weaknesses section for questions. Yes.",
" The authors present a counterfactual generation method for graphs. The proposed method (CLEAR) aims to perturb a given graph in a way that is still close to the original while changing a classifier prediction and respecting the causality in the data generating process. This is achieved by training a graph VAE conditioned on the target label and an auxiliary variable $S$ used by the original data generating process (an SCM), which helps to identify the causal model. The overall loss encourages counterfactuals to be close (wrt a distance metric) to the original graph while achieving the desired label ($Y^*$) and inferring the correct latent structure by keeping the latent codes $Z$ close to a distribution that is conditioned on $S$ and $Y^*$. In experiments they show that the proposed method achieves a higher number of valid counterfactuals that are more proximal while keeping causal relations than previous state of the art. They also provide an ablation study on the different parts of their model. Overall Review\n===========\nOverall, this paper tackles an important problem, the method is sound, and the results are encouraging. Some points could be improved, such as motivating the need of graph counterfactuals, or checking whether the model is repeating the same counterfactual three times. Therefore I recommend \"Weak Accept\" and I would be happy to raise the score based on the rebuttal and the other reviews.\n\nStrengths\n=======\n* The problem of understanding the behavior of ML systems is an important one.\n* The proposed method is sound and it performs better compared to previous state of the art.\n* The authors provide the code, ablations, and information for reproducibility.\n\nWeaknesses\n=========\n* This work focuses strongly on the method and comparison with previous state of the art but it lacks a bit of perspective on what is the final purpose of this research. Some more motivation in the introduction and some qualitative results showing interesting findings on a real graph would make this work more appealing. I like the example in Section 2, so you can focus more on motivating the problem in general rather than giving more examples in the introduction.\n* In the ablation study there is no clear difference between VAE and CLEAR. (The difference is, however, more clear on Figure 4).\n* Since the proposed method predicts 3 counterfactuals, it could just predict three times the same (or very similar) graph. This would improve the validity metric while not adding any additional value to the user. Several recent works focus on providing a set of diverse [28, A, B] and non-trivial explanations [C] (unexpected failure modes of the model), that are more useful for the end user. In fact, in Appendix D, the authors acknowledge that optimizing for diversity is a current limitation, but it would be interesting to study whether the VAE is repeating the same graph or producing different graphs. \n\n[28] Mothilal, Ramaravind K., Amit Sharma, and Chenhao Tan. \"Explaining machine learning classifiers through diverse counterfactual explanations.\" Proceedings of the 2020 conference on fairness, accountability, and transparency. 2020.\n\n[A] Smyth, Barry, and Mark T. Keane. \"A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations.\" arXiv preprint arXiv:2101.09056 (2021).\n\n[B] Hvilshøj, Frederik, Alexandros Iosifidis, and Ira Assent. 
\"On Quantitative Evaluations of Counterfactuals.\" arXiv preprint arXiv:2111.00177 (2021).\n\n[C] Rodríguez, Pau, et al. \"Beyond trivial counterfactual explanations with diverse valuable explanations.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n\n\nDetailed comments\n===============\nOriginality\n-----------\nThe proposed framework, and particularly the focus on causality preservation when generating counterfactuals, are novel to the best of my knowledge.\n\nQuality\n--------\nThe technical and written quality are good, assumptions are clearly stated, and proofs are provided in the Appendix.\n\nClarity\n-------\nOverall, the text is well-written and easy to follow. In section 4.7 I believe that references to Figure 8 should be to Figure 4. In Section 4.8 it is not clear why you do not test $0.1$. Many times you talk about preserving causality and, without any context, it is not clear if you mean that you are reconstructing a causal graph (with directed edges) and you want to preserve edge directions, or if you want the generated graphs to be compatible with the original SCM. After some time reading it becomes clear but it would be better if you made it more clear since the beginning. \n\nSignificance\n--------------\nGiven the increasing impact of machine learning applications in our lives, it is important to better understand how models make their predictions. However, the text does not fully transmit the significance of this work. \n\nReproducibility\n-----------------\nThe authors provide code, equations, and the necessary assumptions to reproduce their results. * Could you better motivate the counterfactual explanation problem in the introduction and the abstract? The idea is that the reader gets the \"big picture\" before you focus on the concrete problem of improving graph counterfactual methods.\n* Could you verify whether your model is or not producing the same counterfactual 3 times? (if it does it is ok, but I think this is important information for the reader to have)\n* Could you clarify what causality you want to preserve? (see clarity above)\n* Why did you skip 0.1 in the hyperparameter sweep? (see above) Yes, the authors include a section in the appendix."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"pi9Eqq0mu_P",
"yYgzkNeMVI",
"yYxCvdKA818",
"IxYeXt1WUS",
"bgIHgLnEop",
"X5hmKSqXymj",
"7O4K674rpVp",
"iu-mTMB-3le",
"qBdQWczCBfl",
"m5TJkBfDPu",
"arsTcYUymKh",
"3KmyohAtGW",
"p3baE5u-zGw",
"ez_cHc3Cu_K",
"qe2t1c6rW1",
"LTOqVZE7qJD",
"nips_2022_YR-s5leIvh",
"nips_2022_YR-s5leIvh",
"nips_2022_YR-s5leIvh"
] |
nips_2022_MHjxpvMzf2x | Symmetry Teleportation for Accelerated Optimization | Existing gradient-based optimization methods update parameters locally, in a direction that minimizes the loss function. We study a different approach, symmetry teleportation, that allows parameters to travel a large distance on the loss level set, in order to improve the convergence speed in subsequent steps. Teleportation exploits symmetries in the loss landscape of optimization problems. We derive loss-invariant group actions for test functions in optimization and multi-layer neural networks, and prove a necessary condition for teleportation to improve convergence rate. We also show that our algorithm is closely related to second order methods. Experimentally, we show that teleportation improves the convergence speed of gradient descent and AdaGrad for several optimization problems including test functions, multi-layer regressions, and MNIST classification. | Accept | This paper proposes a novel, symmetry teleportation approach to optimize the parameters of ML models.
The proposed approach allows iterates to move along the loss level set and improves the convergence speed. The teleportations also exploit the symmetries that are present in the optimization problem.
The paper also includes very encouraging numerical experiments.
I believe that the paper brings more insights and techniques that have been mostly overlooked in the community when training ML models. | test | [
"NOxhvLXzKiz",
"qM8-KTuGbNq",
"-sA4iLaouQ",
"bYGhlDXfMGa",
"0Y2KKVt4TI",
"yWmXYE3PrE2",
"XohTGspg1LGp",
"1jYqH_cuwiF",
"oHu4Y0LHYxW",
"Iwvam7EWo8X",
"vP0Rbsb_NrV",
"Im0m0q_n0UP",
"E2xLfNU9zx",
"GoHhYeMyvHL",
"uDdwTYuLwwq",
"QrmutSIE2EB",
"YQQkHBaa9Bz",
"tRMSpyjeJzS",
"RbJ68Cxazp-"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the clarification. Now, it makes sense to me and I have raised the score. ",
" Thanks for reading our responses and proofs! We really appreciate it.\n\nWe have updated the paper, where we added a new section Appendix C.5. We have formalized the improvement brought by teleportation in SGD and added some discussion about the gap between teleporting using mini-batches vs. all data. Due to time constraint, we are not able to finish the proof that gives a high probability bound. However, we would be happy to add more formal results in the final version.",
" Thanks for addressing many questions in my original review. I have carefully read your responses. You have added a theoretical analysis of a class of optimization problems (quadratic convex functions), which is solid. That is great. Thanks for your efforts.\n\nWe agree that the expected change in loss and convergence is related to the variance in the data as well as minibatch sizes. Empirical results have demonstrated that teleporting using a small number of data points (e. g.80 images in MNIST) is able to improve the convergence. It would be better if there could be a theoretical analysis, especially on the loss landscape.\n",
" Dear reviewers and AC,\n\nWe have submitted our reviews and comments for our submission. We would really appreciate it if you could give us feedbacks on the response, or at least the rebuttal acknowledgement. We hope to have an open and scientific conversation about our work!\n\nThank you for your time!",
" Thank you for the clarifications!",
" \nWe thank the reviewers for their insightful comments. We are encouraged that they find our proposed teleportation method novel, intuitive, and well-motivated. We are also glad that Reviewer W8uy likes the connection between our algorithm and second-order methods.\nWe have added individual response for each reviewer and updated the paper.\nWe would also like to summarize the comments shared among all reviewers and major revisions in the new version of the paper. \n\n* **Detecting symmetries in general optimization problems:** \nDetecting symmetries is not the focus of this paper. \nNevertheless, for neural networks, we did discover a novel set of symmetries (Proposition 4.3) for fully-connected layers. \nFor general optimization problems, we need to know the symmetries to use teleportation. This is usually not hard since we have access to the optimization function.\n\n* **Wall-clock time:**\nWe have added wall-clock time and hyperparameter sweep in Appendix E.2. \nThere we show that teleportation can indeed improve wall-clock time for convergence, in addition to reducing the number of epochs. \nAlso, wall-clock time can be optimized further via more efficient implementation of the teleportation steps. \n\n* **Guaranteed speedup:**\nWe have added a new Appendix D, in which we show that for quadratic convex optimization one teleportation is enough to guarantee faster convergence. We then develop a general condition for when one teleportation gives the optimal trajectory.\n",
" Thank you for your comments and positive feedback!\n\n> How to exploit symmetries for different loss functions?\n\nSince the symmetry comes from the architecture, we find the symmetries by observing what transformations of the parameters leave the loss function unchanged. For example, as shown in the paper, when there are two consecutive layers with elementwise activation functions, the neural networks admits a GL(R) symmetry. Another example is neural network that uses radial activation functions, where SO(n) is a symmetry. Kunin et al. (2021) also mention examples of symmetries in different neural networks include translational, scaling, and rescaling symmetry. Once we find the symmetry in the neural networks, we can apply teleportation by optimizing on the group element to improve convergence rate.",
" Thank you for the insightful comments and detailed suggestions! We have made the suggested edits in the paper and address the main questions below.\n\n**Response to weaknesses**\n> 1. It does not guarantee that the transformed parameters lead to a faster convergence rate throughout the entire training.\n\nIn the new Appendix D, we show that for a class of optimization problems (quadratic convex functions) teleporting once guarantees optimality at all future times.\nFor general problems, we can do multiple teleportations.\nThus, to observe faster convergence rate each teleportation only needs to increase $|d\\mathcal{L}/dt|$ until next teleportation. \nWe also provide a condition for when one teleportation gives the optimal trajectory. We consider a trajectory optimal if for every point on the trajectory, the magnitude of gradient is at a local maximum in the loss level set that contains the point. \n\n\n> 2. It takes time to find g.\n> 3. When the number of weights and data is large, it may increase the complexity.\n\nYes, the computational complexity of teleportation depends on the dimension of weights and the number of samples in a mini-batch. As shown in section 6.3, one teleportation step has the same complexity as back propagation (albeit with a different constant). Since we only teleport a few times during training, the additional time brought by teleportation is small compared to the speedup in convergence.\n\n\n**Response to questions**\n> 1. The meaning of ‘Acceleration’ should be stated clearly in this paper.\n\nBy “acceleration” we mean increasing $|d\\mathcal{L}(g \\cdot \\mathbf{w})/dt|= \\|\\nabla \\mathcal{L}(g\\cdot \\mathbf{w})\\|^2 $ the gradient norm and decreasing the number of epochs required to converge. We have added clarifications in the introduction. We also added empirical results of wall-clock convergence time in Figure 4d, 9, and 10. \n\n> 2. Does the initialization have an impact on the convergence and the final result through teleportation? How and why?\n\nYes, but teleportation reduces the effect of initialization on the convergence rate because it moves parameters in the landscape to a point with better gradient norm. If all points on a level set are reachable by a group action, and we teleport the parameters to the optimal point right after initialization, then initialization does not have an impact on the convergence. In practice, however, the group actions are usually not transitive, and we find the optimal teleportation destination by gradient ascent on a non convex function. Therefore, part of the effect of initialization remains.\n\n\n> 3. The proliferation of saddle points can slow down convergence. Can Symmetry Teleportation effectively escape saddle points?\n\nThat is an interesting point. Yes, teleportation can be an effective technique to escape saddle points. At saddle points, $d\\mathcal{L}(g \\cdot \\mathbf{w})/dt = 0$. By definition, teleportation moves parameters in the directions to increase $d\\mathcal{L}(g \\cdot \\mathbf{w})/dt$. Additionally, since teleportation can be applied at any time during training, we can teleport every time when we notice a decrease in $d\\mathcal{L}(g \\cdot \\mathbf{w})/dt$, which may help us move away from saddle points.\n\n\n> 4. Is there any general guideline for designing symmetry groups?\n\nYes, by observing the form of the optimization function. The choice of symmetry group depends entirely on the architecture. 
For example, as shown in the paper, when there are two consecutive layers with elementwise activation functions, the neural network admits a GL(R) symmetry. Another example of continuous symmetry is a neural network that uses radial activation functions, where SO(n) is a symmetry. \n\n\n> 5. In some scenarios, GD with a learning rate schedule can also achieve a similar loss curve in Fig. 2(c). What advantages do you think GD with teleport has over GD with a learning rate schedule?\n\nTeleportation is a technique orthogonal to learning rate schedules. We can use teleportation in conjunction with a learning rate schedule. In fact, since we have some idea of how the magnitude of the gradient changes with teleportation (it increases and then decreases quickly), teleportation may benefit significantly from a learning rate schedule. \n",
" > 6. As far as I can see, GD in deterministic settings is slightly different from GD in stochastic settings. How do you guarantee that SGD+teleport can accelerate convergence over SGD?\n\nWhether teleportation accelerates the convergence of SGD depends on the data we use. \nThe expected change in loss and convergence is related to the variance in the data as well as minibatch sizes. In our experiments, we observe that even teleporting using a small number of data points (e.g. 80 images in MNIST) is able to improve the convergence for the objective function trained on the entire dataset. This suggests that the loss landscape created by samples of data is similar to the landscape created using all available data. \n\n\n> 7. There is a little abuse of notations.\n\nThanks for pointing these issues out. When using constant learning rates, we assume that $\\eta$ is a scalar multiple of the identity matrix. We have added clarifications in the paper and changed the notation for the number of steps to $t_{max}$.",
" Thank you for your comments! We appreciate the insights on local Lipschitz and smoothness constraints. We address the main questions below.\n\n> **Q1:** This leads us to the question of the existence of such teleportation, because the maximisation problem will also be unbounded. ... my main concern is that this teleportation technique can actually make convergence worse and much slower. Because of that, the method should be used much more carefully and with clear limitations to its applicability.\n\nThank you for pointing this out. \nWe agree that the method should be applied with care, but since we don't modify the loss landscape in any way, we still believe the method is applicable in general to any optimization problem having symmetries. \nTeleportation is simply like initializing at a point with steeper gradients. \nYou are correct that\nthe optimal group element can be unbounded, and teleportation can lead to divergence if we do not bound the parameters during optimization.\nWe propose two approaches to address this problem. \nFirst, we can stop the optimization process when the magnitude of gradient is larger than a pre-set threshold. Second, we can use adaptive learning rates. \nFor example, we can set a lower learning rate immediately after teleportation and increase to its original value after a few steps.\n\n\n> **Q2:** how complex the process of teleportation in general\n\nTeleportation is a simple gradient descent when the symmetry is known. \nIt is as easy as linear regression for linear symmetries and for an MLP with nonlinear activations, it only involves an extra function that implements the group action.\nAlso, note that we do not have to teleport all layers (e.g. we can only teleport the last two fully-connected layers in VGGNet). \nAs we showed in sec. 4.2, even neural networks with nonlinear activations can have continuous symmetries. \nDeriving the teleportation process only requires finding the transformation of parameters that leaves the loss unchanged (e.g. Proposition 4.3).\nIn our paper, the data is not transformed, so transforming parameters may involve finding the pseudoinverse of the output of specific layers, which is quadratic in the number of rows of layer output and linear in the minibatch size. As an example, the runtime complexity for a multi-layer neural network is given in section 6.3.\n\n\n> **Q3:** Some of the notations and objects were unclear\n\n$g \\cdot w$ denotes the group action of a group element $g$ on parameters $w$. We refer to $|d\\mathcal{L}(g \\cdot \\mathbf{w})/dt|$ as the \"convergence rate\", which is equal to the $L_2$ norm of $\\partial \\mathcal{L}(g \\cdot \\mathbf{w})/\\partial \\mathbf{w}$ in gradient flow. We have added a reference to basic group theory in the appendix.\n",
" \n **Q4:** questions about experiments\n\n> a) Figure 4: Why are SGD and AdaGrad starting from different points?\n\nWe show the loss after each epoch, so the loss after the first epoch appears different for SGD and AdaGrad.\nNote that we only compare SGD with SGD-teleport (not with AdaGrad), and compare AdaGrad with AdaGrad-teleport. \nTherefore, SGD and AdaGrad having different initial loss doesn't affect our results. \n\nWe noticed that teleportation was applied at different times for SGD and AdaGrad on MNIST. For consistency, we re-ran the experiment with teleportation in the same epoch and updated Figure 4. \n\n> b) How do you tune learning rates for gradient descent?\n\nWe performed hyper-parameter tuning using the validation set. We choose the learning rates such that further increases affect the converged value.\nIn our experiments, gradient descent with teleportation does not require a different learning rate from regular gradient descent. \nTeleportation improves convergence even when gradient descent uses different learning rates, as shown in Figure 3 and 8. \n\n> c) Figure 5: It seems that the fact that teleportation has bad validation accuracy proves my claim from question 1. \n\nSince teleportation is able to improve the train accuracy, we believe that the bad validation accuracy is not caused by the unbounded maximisation problem or inappropriate learning rates. As we noted in the experiment section,\na possible explanation would be a sharp local minima, which have large gradients but may generalize poorly. \nWe suspect using different or larger minibatches of data to do teleportation might alleviate this or reveal if sharp local minima are indeed the culprit. \n\n\n> d) Figure 3 and 8: It seems that the gradients of the teleported version of AdaGrad are smaller than regular AdaGrad. Despite this, the teleported loss is smaller than regular. How does it correspond with the dependence \"bigger gradient -> faster convergence\"?\n\nActually, the interpretation of the plots is opposite to this. \nLet us clarify. \nIn Figure 3b and 8b, at the same epoch , the gradients of the teleported versions are smaller. \nHowever, this can be explained by the faster convergence of the teleported versions, and that the magnitude of gradients is typically smaller when we are closer to the minima. This is proven by the fact that in Figure 3c and 8c, at the same loss values, the teleported versions have a larger $d\\mathcal{L}(g \\cdot \\mathbf{w})/dt$, meaning that teleportation helped find a better trajectory.\n\n\n> **Q5:** It will be interesting to understand what will happen with d-dimensional quadratic problems.\n\n\nGreat suggestion! \nIndeed we have exact results (Appendix D.1) proving that a single teleportation can find the fastest descending trajectory for convex quadratic loss in $n$ dimensions. \n\nIn brief,\nconsider a quadratic form $L_A(w) = \\frac{1}{2} w^T A w$, where $w \\in \\mathbb{R}^n$ is the parameter and $A \\in \\mathbb{R}^{n \\times n}$ is a diagonal matrix with positive diagonal elements. Then the level sets of $L_A$ are $n$-dimensional ellipsoids centered at 0, with axes in the same direction as the standard basis.\n\nThe gradient of $L_A$ is $\\nabla L_A = Aw$, and the magnitude of the gradient is $\\|\\nabla L\\|^2 = \\|Aw\\|$. The point with largest $\\|\\nabla L\\|^2$ on a level set is in the eigendirection of $A$ corresponding to its largest eigenvalue, or the point on the smallest semi-axes. 
\nThe gradient flow trajectory from this point always points to the global minimum at 0. Therefore, like the 2D ellipse function, one teleportation on the $n$-dimensional ellipsoid also guarantees an optimal convergence rate at all points along the trajectory. \n\nThis analysis can be made more general by considering any positive definite matrix $A$. A more formal statement and additional details can be found in the new section D.1 in the appendix of the revised paper.",
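A brief sketch of the Lagrangian argument behind the claim above, for the diagonal case described in this response (it mirrors, but is not copied from, the formal statement in Appendix D.1):

```latex
% Maximize the squared gradient norm on a level set of L_A(w) = \tfrac{1}{2} w^\top A w,
% with A diagonal and positive entries a_1, \dots, a_n:
\max_{w}\ \lVert A w \rVert_2^2
\quad \text{subject to} \quad \tfrac{1}{2}\, w^\top A w = c .
% Stationarity of the Lagrangian gives 2 A^2 w = \lambda A w, hence A w = \tfrac{\lambda}{2} w,
% so any maximizer is an eigenvector of A.  Along the i-th axis, w = t e_i with
% \tfrac{1}{2} a_i t^2 = c, and therefore
% \lVert A w \rVert_2^2 = a_i^2 t^2 = 2 c\, a_i ,
% which is largest for the largest eigenvalue a_i, i.e. on the shortest semi-axis of the ellipsoid.
```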
" Thank you for your comments and insight on this field. \n\n> paper does not consider question of automatic symmetry detection and does not say anything about practical NN architectures, but this is very important to show that this technique at least can be applied to any practical task.\n\nNote that Section 4.2 addresses this question for multi-layer perceptrons (fully-connected layers) in NNs. \nThe key point is that for two consecutive layers with element-wise activation within any NN, no symmetry detection is needed. For defining the symmetry transformations in Proposition 4.3 one only needs\nthe model architecture.\nWe believe Proposition 4.3 can be generalized to other NN layer types (e.g. CNN or attention layers).\nIn neural networks, both the architecture and the loss function are known, so finding parameter-space symmetry does not require automatic symmetry detection which is often used in discovering symmetry in data. \n\n\n> there are no explicit convergence rate estimations in the paper, so the effect of teleportation cannot be appreciated.\n\nFor convex quadratic functions, we have added a result that one teleportation is guaranteed to improve convergence rate along the entire trajectory in Appendix D.1. \nWe do not have explicit bounds for global convergence rate, which is difficult to obtain for non-convex non-smooth problems. However, we did introduce conditions of when $d\\mathcal{L}/dt$ can be improved in the subsequent steps after a teleportation (Section 5.2). \nWe also provided intuitions of the convergence rate by relating teleportation to second order methods, and showed empirical evidence in test functions and MNIST classification. \n\n\n> it would be good to find at least one case (class of functions + class of symmetries) when GD with teleportation is competitive with GD/Conjugate gradients method/Newton method with explicitly expressed effect of teleportation in final convergence rate.\n\nIn the new Appendix D, we added a class of functions where teleporting once guarantees optimality at all future times. For general functions, we also provide a condition for when one teleportation gives the optimal trajectory. We consider a trajectory optimal if for every point on the trajectory, the magnitude of gradient is at a local maximum in the loss level set that contains the point. \n",
" **Response to the minor questions**\n\n> How did you settle on 10 steps for teleportation? How important is it to tune this number?\n\nWe hand tuned this number. The number of steps, together with the learning rate used to optimize the group element $g$, are chosen such that $d\\mathcal{L}(g \\cdot \\mathbf{w})/dt$ shows clear improvement but does not become large enough to cause divergence. \nWhen $d\\mathcal{L}(g \\cdot \\mathbf{w})/dt$ is not bounded on a loss level set such as in multi-layer neural networks, it is important to restrict the steps for teleportation to a finite value, although in practice the effect of teleportation is not sensitive to the number of steps.\n\nWe added a hyperparameter sweep to study the effect of hyperparameters on the speedup in computational time. In the new Figure 9, most hyperparameter combinations improve the convergence speed in wall-clock time.\n\n> Given your expertise, would you use symmetry teleportation as a default when you run optimization problems with known symmetries? Why or why not.\n\nYes, we would, though there are trade-offs to consider. \nSymmetry teleportation has a clear benefit of improving convergence rate and reducing training cost. \nHowever, the computational overhead of implementing teleportation and hyperparameter tuning may limit its advantages. \nFurther investigation is needed to find the best settings to apply teleportation. \n\n> Small suggestion for Figure 2 b and f: draw the symmetry teleportation trajectory. Right now some of the blue lines are gradient descent while others are teleportation + GD which makes it difficult to parse the details.\n\nThanks for the suggestion! We have updated that figure to make the teleportations clearer.\n\n> (line 62): dt $\\to $ dw\n\nWe are rewording this sentence. \nWe aimed to say: \"In comparison, we search within $G$-orbits for points which maximize $|d\\mathcal{L}(g\\cdot \\mathbf{w})/dt| = \\varepsilon^2 \\|\\nabla \\mathcal{L} (g\\cdot\\mathbf{w})\\|^2 $.\"\n",
" Thank you for your comments!\n\n**Response to the cons and main questions**\n\n> Runtime complexity is not fully reconciled / transparent. \n\n> Practically, how would Figure a look if it is plotted against number of gradient computations or even wall clock time? The runtime analysis per teleportation is great to have, but since this cost of teleportation is amortized (done only every so GD iterations), it isn't completely clear to me how this actually affects practice.\n\nThanks for the suggestion! We added Figures 3(d) and 4(d) that show loss vs. wall clock time for MLP. On randomly distributed data, teleportation slows down optimization, which is expected since the overhead is large for computing group actions using the entire training data. On MNIST, teleportation has negligible effect on training time since only a few mini-batches are used. The speedup with respect to wall clock time is therefore similar to the speedup observed with respect to the number of iterations.\n\n> In practice, symmetries must be known for a particular optimization problem, and even when known, in many cases I imagine optimizing within this group can be very difficult.\n\nWhile obtaining the complete set of symmetries of an arbitrary loss function is hard, finding a subset of the symmetries is often easy. For example, if there are two consecutive layers with element-wise activation, which is common in neural networks, we are able to teleport using the symmetry created by this structure alone.\nRegarding optimization within the symmetry group $G$, our method is tractable for the following reasons:\n* We do not seek a globally optimal $g\\in G$, instead $g$ only needs to improve $d\\mathcal{L}/dt$ until the next teleportation. \n* Since we work with continuous symmetry, small changes in the group element can be approximated to first order as $g \\approx I+\\varepsilon T$, where $T\\in \\mathfrak{g}$ is in the Lie algebra of $G$. This approximation avoids the expensive computation of the exponential map.\n* The constraints on $T$ are generally easy to satisfy. For example, when $G=GL_d(\\mathbb{R})$, $T$ can be any arbitrary $d\\times d$ matrix. \nThus, for $GL_d(\\mathbb{R})$ we can simply do gradient descent on the unconstrained optimization $\\max_g d\\mathcal{L}(g\\cdot \\mathbf{w})/dt$. \n\n\n> Does the theory behind convergence take into account the extra steps needed to solve for the symmetry teleportation problem? I'm wondering if the symmetry teleportation problem itself has a different convergence rate, and the real convergence rate would be worse?\n\nNo, the convergence rate of the optimization is decoupled with computational cost of teleportation steps in our analysis. In theory, teleportation improves the rate but may be more computational costly. However, since we do not teleport at every step, the cost of teleportation is amortized. We also show empirically that teleportation reduces the overall computation time in the new Figure 10.\n\n\n> Since only a first order approximation is used for exp(x), the output of the neural network isn't invariant anymore right? Moreover, if you take more teleportation steps, this should actually diverge or at least cause the parameters to be arbitrarily far away from the level set. What's the intuition behind this not occurring in practice; is it due to the fixed number of steps for optimizing the teleportation?\n\nUp to $O(\\varepsilon^2)$, the loss remains invariant. 
\nWhen making $k$ teleportation steps, as long as the $O(\\varepsilon^2)$ term is still negligible, the approximation error is not significant. \nWe show here that the $O(\\varepsilon^2)$ terms are negligible when $k\\varepsilon \\ll 1$. \n\nThe Taylor expansion of the exponential map is $g=\\exp[\\varepsilon T]= I+\\varepsilon T +O(\\varepsilon^2)$. When applying $k$ steps of teleportation with $g_i = I+\\varepsilon T_i$, we have a total $g$ given by \n$$g = \\prod_{i=1}^k g_i = I + \\varepsilon \\sum_{i=1}^k T_i + \\varepsilon^2 \\sum_{i<j} T_iT_j + O(\\varepsilon^3) $$\n\nThe error introduced by the first-order approximation is $O(\\varepsilon^2)$, which is small since $\\varepsilon \\ll 1$.\nThe number of $O(\\varepsilon)$ terms is $k$ (one for each $T_i$) and the number of $O(\\varepsilon^2)$ terms is $k(k-1)/2$ ($k$ choose $2$ of the $T_i$). \nTherefore, as long as $(k-1)\\varepsilon/2 \\ll 1$, the higher order terms can be ignored. \n",
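To make the loop structure of these teleportation steps concrete, below is a hypothetical, minimal sketch on a toy two-layer linear model, whose loss is exactly invariant under $(U, V) \mapsto (U g^{-1}, g V)$. It only illustrates taking a few small multiplicative steps $g_i = I + \varepsilon T_i$ by gradient ascent on the squared gradient norm; the paper's group action for nonlinear layers (Proposition 4.3) is different and data-dependent, and the names and values `eps`, `lr_T`, and `n_steps` are illustrative, not the paper's.

```python
import torch

torch.manual_seed(0)
d_out, d_hidden, d_in, n = 3, 4, 5, 20
U = torch.randn(d_out, d_hidden)
V = torch.randn(d_hidden, d_in)
X, Y = torch.randn(d_in, n), torch.randn(d_out, n)

def loss_grads(U, V):
    # Analytic gradients of L(U, V) = ||U V X - Y||_F^2 for this toy model.
    R = U @ V @ X - Y
    return 2 * R @ (V @ X).T, 2 * U.T @ R @ X.T

eps, lr_T, n_steps = 0.1, 1e-3, 10
for _ in range(n_steps):
    T = torch.zeros(d_hidden, d_hidden, requires_grad=True)
    g = torch.eye(d_hidden) + eps * T                 # first-order group element I + eps*T
    gU, gV = loss_grads(U @ torch.linalg.inv(g), g @ V)
    grad_norm_sq = (gU ** 2).sum() + (gV ** 2).sum()  # objective: squared gradient norm
    grad_norm_sq.backward()                           # gradient of the objective w.r.t. T
    with torch.no_grad():
        step = torch.eye(d_hidden) + eps * lr_T * T.grad   # small ascent step in the group
        U, V = U @ torch.linalg.inv(step), step @ V        # teleport: loss value unchanged
```

In a fuller implementation one would also cap the resulting gradient norm or lower the learning rate right after teleporting, as discussed earlier in this thread.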
" This paper proposes moving along the level set during an optimization problem. Doing so may lead to faster gradient iterations, and is proven and shown empirically. A few examples of symmetries are discussed (modifications to the parameters that leave the output of the network invariant). For a nonlinear MLP, only an input-dependent symmetry is discussed. Runtime analysis is also performed. Pros:\n - Paper is well-written.\n - Problem setting (optimization of MLPs) is relevant.\n - Proposed solution is simple to understand.\n\nCons:\n - Runtime complexity is not fully reconciled / transparent.\n - Some details of the symmetry teleportation is not completely clear to me.\n - In practice, symmetries must be known for a particular optimization problem, and even when known, in many cases I imagine optimizing within this group can be very difficult. * Does the theory behind convergence take into account the extra steps needed to solve for the symmetry teleportation problem? I'm wondering if the symmetry teleportation problem itself has a different convergence rate, and the real convergence rate would be worse?\n\n * Practically, how would Figure a look if it is plotted against number of gradient computations or even wall clock time? The runtime analysis per teleportation is great to have, but since this cost of teleportation is amortized (done only every so GD iterations), it isn't completely clear to me how this actually affects practice.\n\n * Since only a first order approximation is used for exp(x), the output of the neural network isn't invariant anymore right? Moreover, if you take more teleportation steps, this should actually diverge or at least cause the parameters to be arbitrarily far away from the level set. What's the intuition behind this not occurring in practice; is it due to the fixed number of steps for optimizing the teleportation?\n\nMinor:\n\n * How did you settle on 10 steps for teleportation? How important is it to tune this number?\n\n * Given your expertise, would you use symmetry teleportation as a default when you run optimization problems with known symmetries? Why or why not.\n\n * Small suggestion for Figure 2 b and f: draw the symmetry teleportation trajectory. Right now some of the blue lines are gradient descent while others are teleportation + GD which makes it difficult to parse the details.\n\n * Type (line 62): dt -> dw .",
" Authors consider the class of optimization problems having group of symmetry G, which is simple enough to optimize norm of the gradient over its orbits, and propose technique of \"teleportation\" from current point to the more suitable one on the same orbit before every gradient step. They present the examples of problems having symmetries, including simplest NN, and numerically show the competitiveness of proposed technique in these simple cases and more complicated test problems (MNIST). New technique is provided with sketchy theoretical justification of acceleration effect. The idea of using group theory in optimization for solving problems with known symmetries is not new (I've found review with DOI:10.1007/978-1-4614-1927-3_9 almost immediately, but it is hardly the earliest), but this topic was not well discovered for continuous convex case. The first strength of the paper is that it attracts attention to this blank space. The reason why it is blank is the unnaturalness of group of symmetry oracle — symmetries that cannot be used to simplify the model itself are often too complicated to be used in auxiliary problems as well. This is related with first weakness — paper does not consider question of automatic symmetry detection and does not say anything about practical NN architectures, but this is very important to show that this technique at least can be applied to any practical task. The second weakness is that sketch of theoretical analysis is too sketchy, there are no explicit convergence rate estimations in the paper, so the effect of teleportation cannot be appreciated. Regarding second weakness — it would be good to find at least one case (class of functions + class of symmetries) when GD with teleportation is competitive with GD/Conjugate gradients method/Newton method with explicitly expressed effect of teleportation in final convergence rate. It seems to be possible. Limitations seem to be obvious for the readers of the paper, and authors do not pretend to propose the panacea. ",
" Using teleportation of parameters to a new point at the same loss value but with a larger gradient norm, the authors offer a new approach for accelerating the convergence of gradient-based optimization methods. This new scheme can accelerate the convergence of gradient-based optimization methods. An intuitive understanding of the connection between such a scheme and second-order methods is demonstrated by the authors. The major application discussed in the paper are deep neural networks and two rather simple classical functions. An empirical evaluation of the effectiveness of the authors' approach to solving these problems was provided, along with a comparison to gradient descent and AdaGrad. The symmetry teleportation technique is the main idea and contribution of the paper. It is based on the fact that after teleportation, the loss will be the same, but the landscape can be much better, and hence we will get faster convergence to the local minima. I liked the presented method's relations with second-order methods both theoretically and practically on Booth function. The idea is interesting and original. Some proofs are presented, and they seem to be correct. The authors show some experiments for small feed-forward networks with leak-relu activations on MNIST. The paper is well-structured and well-written. I have some concerns about the correctness and effectiveness of the teleportation technique presented in the paper. I will address them in the questions section. 1) There are two main assumptions for unconstrained optimization problems: $L$-smoothness (Lipschitz-continuous gradient) and $M$-Lipschitz-continuous function. Both are defined globally but we are actually using them locally on the set around the starting point $x_0$ and the solution $x_{\\ast}$. The reason why we can do it is the monotonicity and compactness of gradient-based methods. With that local approach, we can make our function class much broader. For example, x4 is not $L$-smooth or $M$-Lipschitz, but we can constrain $L$ locally by some constant dependent on $x_0$, and the gradient method will converge for such a $L$. Note that the presented Rosenbrock function is not $L$-smooth or $M$-Lipschitz globally. The same problem arises for neural networks. Additionally, level sets can be unbounded. This leads us to the question of the existence of such teleportation because the maximization problem will also be unbounded. For example, one of the simplest examples of neural networks is a linear 2-layer network without activations for one data-point $(1,1)$ with MSE-loss. $\\min (x \\cdot y - 1)^2$. All points $x = 1/y$ are local minima. Let us start from parameters $a = (2,1) $. The gradient norm at $b = (1000, 0.002)$ is bigger than the gradient norm of $a$. So, we'll teleport to $b$, but why is this location superior? Furthermore, because of smoothness, the learning rate at $b$ should be less than $1/(1000^2+0.002^2)<10^{-6}$, whereas the learning rate at $a$ is around $1/5$. So, my main concern is that this teleportation technique can actually make convergence worse and much slower. Because of that, the method should be used much more carefully and with clear limitations to its applicability. \n\n2) It is unclear to me, how complex the process of teleportation is in general. \n\n3) Some of the notations and objects were unclear to me. So, I would recommend adding a notation section to the appendix to clarify the introduced notation. For example, what does $g \\cdot w$ mean? Why are you using $dL/dt$ and not $dL/dw$? 
Some definitions of used groups or links to the literature where the reader can be educated about them will also be helpful. \n\n4) In experiments,\na) Figure 4: Why are SGD and AdaGrad starting from different points?\nb) How do you tune learning rates for gradient descent? Do \nc) Figure 5: It seems that the fact that teleportation has bad validation accuracy proves my claim from question 1.\nd) Figure 3 and 8: It seems that the gradients of the teleported version of AdaGrad are smaller than regular AdaGrad. Despite this, the teleported loss is smaller than regular. How does it correspond with the dependence \"bigger gradient -> faster convergence\"? \n\n5) The authors showed intuition about two-dimensional quadratic problems (ellipsoid). It will be interesting to understand what will happen with d-dimensional quadratic problems.\n\n#######################\nDuring the rebuttal, the authors addressed most of my concerns and questions. They improved the paper. Special thanks for the new Section D.1 about quadratic problems. I increased my overall score to 6. The authors adequately addressed the limitations.",
" Existing gradient-based optimization methods update the parameters locally, in a direction that minimizes the loss function. The author proposes a different optimization method to improve the convergence speed by teleportation which transforms parameters while making the loss invariant. The author gives theoretical proof and experimentally shows that teleportation improves the convergence speed of GD and AdaGrad for several optimization problems. Strengths:\n1. The teleportation improves the convergence speed of subsequent steps by transforming parameters to another point with steeper gradients.\n2. The symmetry teleportation takes advantage of higher-order landscape geometry but uses only gradient information.\n\nWeaknesses\n1. It does not guarantee that the transformed parameters lead to a faster convergence rate throughout the entire training.\n2. It takes time to find g.\n3. When the number of weights and data is large, it may increase the complexity.\n 1. The meaning of ‘Acceleration’ should be stated clearly in this paper.\n2. Does the initialization have an impact on the convergence and the final result through teleportation? How and why?\n3. The proliferation of saddle points can slow down convergence. Can Symmetry Teleportation effectively escape saddle points?\n4. Is there any general guideline for designing symmetry groups?\n5. In some scenarios, GD with a learning rate schedule can also achieve a similar loss curve in Fig. 2(c). What advantages do you think GD with teleport has over GD with a learning rate schedule?\n6. As far as I can see, GD in deterministic settings is slightly different from GD in stochastic settings. How do you guarantee that SGD+teleport can accelerate convergence over SGD?\n7. There is a little abuse of notations. The learning rate, $\\eta$, and the learning rate matrix, $\\eta$. The steps, T, and the transformation matrix, T.\n\nTypos\n\nLine 73 leads -> lead\n\nLine 88 flow -> flows\n\nLine 130 are -> is\n\nLine 227 reduces -> reduce\n\nLine 272 has -> have\n\nLine 285 has -> have\n\nLine 315 minima -> minimum\n\netc. Yes",
" This work proposes an accelerated gradient-based optimization algorithm, symmetry teleportation, which exploits symmetries in the loss landscape. It improves current neural teleportation methods by searching for teleportation destinies that lead to the largest improvement in the magnitude of gradient. This work also provides empirical evidence showing the improved convergence speed of gradient descent and AdaGrad in different optimization problems. Strength: \n1. This work proposes a novel optimization algorithm that applies symmetric teleportation to accelerate the convergence of gradient descent.\n2. The problem is well-motivated, and the work also provides an intuitive illustration of the ideas behind the proposed algorithm.\n\nWeakness:\n1. It would be better if the work could explain more about how to exploit symmetries for different loss functions. How to exploit symmetries for different loss functions? There are no potential negative societal impacts of the work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3,
2
] | [
"qM8-KTuGbNq",
"-sA4iLaouQ",
"oHu4Y0LHYxW",
"nips_2022_MHjxpvMzf2x",
"XohTGspg1LGp",
"nips_2022_MHjxpvMzf2x",
"RbJ68Cxazp-",
"tRMSpyjeJzS",
"tRMSpyjeJzS",
"YQQkHBaa9Bz",
"YQQkHBaa9Bz",
"QrmutSIE2EB",
"uDdwTYuLwwq",
"uDdwTYuLwwq",
"nips_2022_MHjxpvMzf2x",
"nips_2022_MHjxpvMzf2x",
"nips_2022_MHjxpvMzf2x",
"nips_2022_MHjxpvMzf2x",
"nips_2022_MHjxpvMzf2x"
] |
nips_2022_y8FN4dHdxOE | Fused Orthogonal Alternating Least Squares for Tensor Clustering | We introduce a multi-modes tensor clustering method that implements a fused version of the alternating least squares algorithm (Fused-Orth-ALS) for simultaneous tensor factorization and clustering. The statistical convergence rates of recovery and clustering are established when the data are a noise-contaminated tensor with a latent low rank CP decomposition structure. Furthermore, we show that a modified alternating least squares algorithm can provably recover the true latent low rank factorization structure when the data form an asymmetric tensor with perturbation. Clustering consistency is also established. Finally, we illustrate the accuracy and computationally efficient implementation of the Fused-Orth-ALS algorithm by using both simulations and real datasets. | Accept | The paper proposes a multi-mode tensor clustering method using an alternating least squares algorithm. The reviewers unanimously like the paper and the content. Some reviewers have sought a few clarifications, including on the proofs. Hope the authors would address them in the revised version. | train | [
"vjcLA6_r0xv",
"lVs6Tp9I0K",
"7-xe_M4_DkV",
"bI7XrHAU9Iz",
"rPDxU5i67DES",
"lYaxkDJD_Rxp",
"1qtseNrM0Yx",
"PyIh0UWp0Xd",
"W2xFDUDWsok",
"e_sOn_wSflx",
"fI4lyJFdDQq",
"I2VW2HAMJ3K",
"JaR9eozPdWx",
"OnYlF6nvpIt"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Authors, thank you for the detailed response.\nI am satisfied with the explanation provided for my questions. I stick to my original rating. ",
" **On the sixth issue** For the derivation of Line 100, the whole error bound can be split into 2 parts, the first part is in the order our final result of Theorem 1, which is $\\gamma\\rho^2(K-1) + \\psi/w_{\\min}$; the second part is $12\\sqrt{2}w_{\\max}/w_{\\min}\\tilde{q}$. Recall the defination of $\\tilde{q}$ from Line 40, $\\tilde{q} = \\alpha\\epsilon_0 + 2\\rho(K-1)$. Thus, to show $12\\sqrt{2}w_{\\max}/w_{\\min}\\tilde{q} < 1$, it's equivalent to show $12\\sqrt{2}w_{\\max}/w_{\\min}(\\alpha\\epsilon_0 + 2\\rho(K-1))<1$. We transfer this as a condition on the initialization $\\epsilon_0$, i.e., $\\epsilon_0< 1/(12\\sqrt{2}\\gamma\\alpha) - 2\\rho(K-1)/\\alpha$. We have included this initialization condition in Assumption 2 Line 168. \n\n\n**On Line 191 intuition of $\\gamma$** Our intuition is as follows: as $\\gamma$ increases, WLOG can assume $w_1\\leq w_2 ... \\leq w_K$, under the constraint that the spectral norm of $\\mathcal{Y}^*$ is bounded, the minimum 'signal' $w_1$ is small which makes the latent factors difficult to recover (imagine the extreme case when $w_1\\approx 0$, $w_1 \\boldsymbol A_{:1}\\circ \\boldsymbol B_{:1}\\circ \\boldsymbol C_{:1}\\approx 0$ and thus $\\boldsymbol A_{:1}, \\boldsymbol B_{:1}, \\boldsymbol C_{:1}$ are difficult to detect). We plan to add more experiments in the revised manuscript to validate this with a goal of illustrating a different recovery error when $\\gamma$ takes {large, medium, small} values. \n\n**On large standard deviation of $\\mu=0.1$ in table 1** Our results stated in Table 1 are exactly the experimental results of 50 replications with multiple random starts. Our intuition of large standard deviation for $\\mu = 0.1$ is that the distance between four clusters in $\\boldsymbol C$ is smaller for $\\mu=0.1$ compared to $\\mu=0.7$, which increases the difficulty of clustering tasks. \n\n**On readability and typos**\nThe reviewer provided a lot of excellent suggestions for improving readability. We will use all of them in the revised version.\n",
" We sincerely appreciate the constructive feedback from the reviewer which will lead to a much improved paper! \n\n**On the first issue** We apologize for the typos which are now corrected. As the reviewer noted, we plan to split $II_{12}$ into two parts, the first part is $\\Lambda$ (in our original notation, $\\Lambda$ is a scalar and we will modify it in the appendix, which should be $\\Lambda = a\\mathbf{1}$) which can be expressed as a bounded scalar $a$ times a vector of 1s with the conformable size of $\\boldsymbol C_{:l}$; the second part is exactly what the reviewer describes $\\nu_0w_l\\boldsymbol C_{:l}$. Since, in the later analysis when $\\Lambda$ appears, we only care about the upper bound on its $\\ell_2$ norm (in equations 12, 13, 14), changing $\\Lambda$ to bounded scalar $a$ times vector of 1 will not affect our final convergence results for factors $\\boldsymbol C_{:l}, l>1$. We thank the reviewer for pointing out the unclear aspects in the proof of Theorem 1 and below are some updates we will revise in the appendix:\n1. We revised the definition $II_{12} = (K-1)\\Delta \\Vert II_{12}^\\prime\\Vert_2\\mathbf{1}+ (K-1)\\Delta(\\xi+\\rho)w_i\\boldsymbol C_{:i}$ with $\\Vert\\Delta II_{12}^\\prime\\Vert_2\\leq \\Vert i_1\\Vert_2 +\\Vert i_2\\Vert_2 +\\Vert i_3^\\prime\\Vert_2 + \\Vert i_4^\\prime\\Vert_2$$\\leq \\xi\\epsilon_0w_{\\max}\\alpha + (\\epsilon_0+\\xi)\\rho(K-1)w_{\\max} + \\rho^2(K-2)w_{\\max} + (\\epsilon_0+\\rho)w_\\max$. \n2. $\\xi$ is upper bound for $\\max_{j<i}\\Vert \\xi_j\\Vert_2$ which is upper bounded by $\\xi \\leq 10\\gamma\\alpha K/\\sqrt{d}$ (The result derived in Lemma 3).\n\nWe hope the current version for the related parts in the proof is now clear. \n\n**On the second issue** In our bound, choices for $j_1$ and $j_2$ do not have a large effect on the final bound of $II_{14}$ since we take the uniform bound for $\\sum_{j_1\\leq i}\\bar{\\boldsymbol A}_{:j_1}^\\top \\boldsymbol A_{:i}$ and $\\sum_{j_2\\leq i}\\bar{\\boldsymbol B}_{:j_2}^\\top \\boldsymbol B_{:i}$, i.e. $(K-1)\\Delta$.\n\nThus, the relevant term in $ii_4$ that we care about is \\max_{j_1<i, j_2<i}\\mathcal{Y}^*(\\boldsymbol A_{:j_1}, \\boldsymbol B_{:j_2}, I). A detailed analysis could be: (1) if the maximum is attained when $j_1\\neq j_2$, the upper bound for $\\Vert ii_4\\Vert_2$ is exactly what we derived in the appendix. (2) if the maximum is attained when $j_1 = j_2$, $\\Vert ii_4\\Vert_2\\leq (K-1)\\rho^2w_{\\max} + w_{\\max}\\rho$. Since the difference between these two bounds is just the difference between $\\rho^2 w_{\\max}$ and $\\rho w_{\\max}$, it can be easily proved that $\\rho^2 w_{\\max} < \\rho w_{\\max}$ due to Assumption 1 where $\\rho\\leq \\alpha/\\sqrt{d}$ is an incoherence parameter which relaxes the orthogonalization condition on the latent factors. Thus, the upper bound on $\\Vert ii_4\\Vert_2$ under the scenario (1) $j_1\\neq j_2$ is a loose bound compared to the scenario (2) $j_1 = j_2$. That's the reason we don't state the result of the second scenario. \n\nWe will add a discussion on the bound of $\\Vert ii_4\\Vert_2$ in the final version of the appendix and thank the reviewer for pointing this out to allow us to make this important clarification!\n\n**On the third, fourth and fifth issues** We are grateful to the reviewer for raising these concerns and we apologize for not being clearer. Because of the close relationship of these three issues, we combine them and give some short answers here. 
We will add a more detailed derivation on upper bound of $\\Vert \\Lambda\\Vert_2$ in terms of $\\eta_x$, lower bound on the denominator of $II_{1}$ and final upper bound on $\\Vert II_{1}\\Vert_2$ in the revised appendix. \n1. On the detailed definition of $\\eta_x$, we have done some reorganization on $\\eta_2, \\eta_3$ using the relationship between $\\epsilon, \\rho, (K-1)\\Delta, \\xi$ which will make the derivations look clearer. Details will be added in the revised version.\n2. In terms of Line 91, the short answer is that we follow the similar proof strategy as in step 1 (which is the convergence error bound for the first factor $\\boldsymbol C_{:1}$, detailed procedure is in proof of Theorem 3 (step 2) in Sun & Li, 2019). \n3. In terms of derivation for equation 13, we don’t neglect the term $\\eta_0 - (1-\\epsilon_0^2)$. The upper bound for $\\ell_2$ norm of the whole numerator can be simplified as $2\\Vert\\Lambda\\Vert_2 + (\\eta_0+1+\\epsilon_0^2)w_{\\max} + \\psi$. $2\\Vert\\Lambda\\Vert_2\\leq 2w_{\\max}(\\alpha\\epsilon_0^2 + \\epsilon_0\\rho(K-1) + \\rho^2(K-2))$ which is closely related to definition of $f(\\epsilon_0, \\rho, K)$ defined in appendix Line 34. Thus, the remaining work is to show $\\eta_0 - (1-\\epsilon_0^2)\\leq 2(\\alpha\\epsilon_0^2 + \\epsilon_0\\rho(K-1) + \\rho^2(K-2))$ which can be proved under the initialization condition on $\\epsilon_0$.\n ",
" I have spent some more time on the proof of Theorem 1 and have further questions and concerns. I understand that two things are done at the same time in the proof: write $II_1$ as a sum of two vectors, one proportional to $w_i \\mathbf C_{: i}$ and $\\boldsymbol \\Lambda$, and find an upper bound on the norm of $\\boldsymbol \\Lambda$. Let's write the first vector as $\\nu_0 w_i \\mathbf C_{: i}$. \n\nThe first issue is that the first term $\\nu_0 w_i \\mathbf C_{: i}$ needs to be exact, but some upper bounds are used in lines 72, 77, and 78. Norms and vectors are mixed, and some inequalities do not make sense, like in lines 79 and 82 (they should be comparing norms and not vectors). Furthermore, all $\\xi$ appearing in bounds should be $\\Vert \\xi \\Vert_2$.\n\nThe second (probably minor) issue is the upper bound of $\\Vert II_{14} \\Vert_2$. Equation (11) is true when $j_1 \\neq j_2$, when summing, the equality case appears $(i-1)$ times and modifies the upper bound:\n$\\Vert II_{14} \\Vert_2 \\leq (K - 1) \\Delta^2 w_\\text{max} [(K - 1) (\\xi^2 \\alpha + \\rho^2 + 2 \\xi + 2 \\rho \\xi (K - 1)) + (K - 2)^2 \\rho^2 + 2 \\rho (K - 2) + 1]$.\n\nThe third issue concerns $\\boldsymbol \\Lambda$ and the coefficients $\\eta_x$. Indeed, $2 w_i \\epsilon_0$ from the bound of $\\Vert II_{1 1} \\Vert_2$, $2(K - 1) \\Delta (\\epsilon_0 + \\rho) w_\\text{max}$ from the bounds of $\\Vert II_{1 2} \\Vert_2$ and $\\Vert II_{1 3} \\Vert_2$, and $2 (K - 1)^2 \\Delta^2 (\\rho + \\xi) w_\\text{max}$ from the (uncorrected) bound of $\\Vert II_{1 4} \\Vert_2$ are not present in $\\boldsymbol \\Lambda$. Moreover, in $\\eta_2$, the term $2 \\epsilon_0 \\rho (K - 1) w_\\text{max}$ from the bound of $\\Vert II_{1 1} \\Vert_2$ became $ \\epsilon_0 \\rho (K - 1) w_\\text{max}$ and the term $\\rho^2 (K - 1) w_\\text{max}$ from the same bound became $\\rho^2 (K - 2) w_\\text{max}$ in $\\eta_3$. This last mistake should not change anything.\n\nThe fourth issue is just a question: how do you get the equation of line 91? I think that this is based on the fact that the numerator of $II_1 = \\nu_0 w_i \\mathbf C_{: i} + \\boldsymbol \\Lambda$ and that $\\mathcal Y = \\mathcal Y^* + \\mathcal E$ but I did not manage to retrieve the bound, can you help me on that? I also think that there is a mistake, and in the following lines, $w_\\text{max} \\Vert \\boldsymbol \\Lambda \\Vert_2$ should be replaced with $ \\Vert \\boldsymbol \\Lambda \\Vert_2$. I also was not able to retrieve the inequalities of line 94. Giving an upper bound of (K - 1) \\Delta could help.\n\nThe fifth issue is again a question: how do you handle the term $| \\eta_0 - (1 - \\epsilon_0^2) |$? Doing the computations, I find (13) from (12) by neglecting this term.\n\nThe sixth issue: you conclude in line 100 with the desired upper bound + $\\beta \\epsilon_0$ ($\\epsilon_0$ is missing, by the way) with $\\beta \\leq 1$. To get the final result, you have to iterate and need $\\sum_n \\beta^n < \\infty$, but this is only true if $\\beta < 1$. Can you prove that?\n\nI may have missed things or made mistakes, so do not hesitate if you disagree with the previous points. 
\n\n\nMiscellaneous minor comments:\n- line 71: there is still an extra equal sign in the definition of the spectral norm.\n- line 121: $\\mathbf u$ should be one of the arguments instead of $\\mathbf u$ in the definition of the Fuse operator.\n- line 147: to be consistent with line 96, $\\mathbf i = [i, i + 1]$ should be replaced with $(i, i + 1)$.\n- line 191: it is coherent now, but do you have an intuition why the lower bound is better if the weights are similar?\n- line 209: just before Theorem 2, true cluster means are written $\\boldsymbol \\mu$, and in Theorem 2, they are written $\\boldsymbol \\mu^*$. Keeping the lighter version is probably better.\n- in Table 1, the standard deviation is relatively high in the $\\mu = 0.1$ column. Does this remain true with multiple random starts?\n\n- in $II_{14}$, $j_1$ should be used for $\\mathbf A$ and $j_2$ should be used for $\\mathbf B$.\n- in the definition of $\\Delta$, why is the second line true, and why do we need to replace $\\mathbf A_{: i}$ with $\\mathbf A_{: i} + \\hat{\\xi}_i$? \n- Appendix line 72: forgetting that we are comparing vectors, a maximum over $j$ should appear.\n- Appendix lines 77 and 78: sums should be over $l \\neq i$.\n- Appendix line 84: it corresponds to $ii_3$.\n",
" We sincerely appreciate the hard work and time of the reviewer on the revised supplementary and manuscript!\n\nYes, the proof process of theorem 1 requires the induction result of Lemma 3 and Lemma 3 takes the result of Theorem 1 as assumption. Our original concern is incorporating the proof of Lemma 3 into Theorem 1 might give the reader a sense of confusing since we switch the gear a lot and take some effort to show the orthogonalization does not affect the factors which have not been recovered but ensures that the mth estimate never has high correlation with the factors which have already been recovered. But now it seems combining those two parts of proof (Theorem 1 and Lemma 3) makes more sense and is easier to understand. We thank your constructive comments and will modify it in the manuscript. \n\nThanks!",
" I thank the authors for their explanations and for considering my comments. I will dive deeper into the new version of the manuscript in the following days but can make the following remark so far.\n\nI now think that the proofs of Theorem 1 and Lemma 3 are sound, even if I need extra time to check the details carefully. However, I think the way it is written is confusing. I believe that merging Lemma 3 into Theorem 1 would make things easier. Indeed, in the proof of Lemma 3, line 162, it is stated that $\\Vert \\hat{\\xi}_p \\Vert_2 \\leq 6 \\sqrt{2} (\\alpha^2 + 1) K \\gamma / d$, however, this can only be proven using the induction hypothesis and is the main result of the proof of Theorem 1. On the other hand, in the proof of Theorem 1, in lines 66 and 75, Lemma 3 is invoked, but only the induction hypothesis is required. What do you think of this suggestion?",
" We thank all reviewers for their detailed feedback and constructive suggestions! They will contribute to a much improved final version.\n\nWe appreciate all comments pointing out insufficiencies and providing ideas for future improvements. Below we answer each reviewer’s questions and issues raised as separate comments. \n",
" We sincerely appreciate the hard work of the reviewer who provided an extremely detailed and helpful report. \n\n*Additional references:* \n\nAnandkumar A, Ge R and Janzamin M (2014) Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-1 Updates. arXiv:1402.5180 .\n\nSun WW, Lu J, Liu H, and Cheng G (2017) Provable sparse tensor decomposition. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79, 3, 899–916\n\nSun WW and Li L (2019) Dynamic tensor clustering. Journal of the American Statistical Association, 1–28\n\nTomioka R and Suzuki T (2014). “Spectral norm of random tensors.” arXiv:1407.1870 \n\nSharan V and Valiant G (2017). “Orthogonalized ALS: A theoretically principled tensor decom-\nposition algorithm for practical use.” Proceedings of the 34th ICML, 3095–3104. \n",
" We thank the reviewer for the very detailed report. The reviewer raised an important issue on Theorem 1 that will determine the final score and the acceptance of paper. We clarify the issue below and hope that our answer is satisfactory.\n\n*On proof of Theorem 1 and connection to Lemma 3:*\n\nWe thank the reviewer for pointing out the unclear aspects in the proof of Theorem 1 and its connection to Lemma 3. We agree with the reviewer assessment, and in fact have already (after submission) reorganized the appendix to flow more smoothly and clarify the raised issue. We think it’s much easier to read now and we are sorry that you had to struggle through the old version. In short, the Lemma 3 assumption is for the factors that have already converged and the bound is in the same form as the bound of Theorem 1 for estimated factors.\n\nThe strict mathematical derivations have been provided in the revised appendix and we provide a brief explanation of the proof logic here: we mimic the proof idea of Lemma 3 in Sharan, V. and Valiant, G. (2017). We prove that Fused-Orth-ALS recovers the remaining factors by induction. The first step is the base case (which is proved in the first half part of appendix A) and the algorithm recovers the first factor. We next show that if the first $(m−1)$ factors have converged, then the $m$th factor converges with high probability. The main idea is that as the factors have small correlation with each other, hence orthogonalization does not affect the factors which have not been recovered but ensures that the $m$th estimate never has high correlation with the factors which have already been recovered. The key idea for Lemma 3 is that the orthogonal basis is close to the original factors as the factors are incoherent. We have modified Lemma 3 in the updated appendix and we believe that the modified appendix addresses your concern.\n\n*On readability and typos:*\n\nThe reviewer provided a lot of excellent suggestions for improving readability. We will use all of them in the revised version. Some notes on a couple of the suggestions are below:\n\n*On validity of assumptions in real and simulated datasets:* Please note that some assumptions are common in tensor decomposition papers, and some can only be validated with high probability under some special cases.\n\n- Assumption 1: incoherence condition is commonly used in the tensor decomposition literature (Anandkumar et al., 2014; Sun et al., 2017; Sun and Li, 2019) which relaxes the requirements on the orthogonality of columns in factor matrices. Anandkumar et al. (2014) provides detailed proof that it is satisfied if columns of factor matrices are uniformly and independently drawn from the unit sphere. The additional constraint on the spectral norm of factor matrices can be proved to be satisfied with high probability using the bounded tensor spectral norm (Tomioka and Suzuki (2014)). \n- Assumption 2 restricted initialization for Fused-Orth-ALS algorithm to be related with weights ratio $\\gamma$, rank $K$ and dimension $d$. Please note that this assumption is only needed for computational issues; detailed explanations for each term can be found in appendix A. \n- Assumption 3 bounds the perturbation level in terms of the spectral norm of error tensor $\\psi$. This can be satisfied with high probability if each element in the error tensor $\\mathcal{E}$ follows an i.i.d sub-Gaussian distribution as stated in Tomioka and Suzuki (2014). 
\n- Assumption 4: as we explained in Line 175-181, it puts some restrictions on the clustering complexity of factor matrices, which is usually difficult to verify in a real dataset. \n\nWe believe this is a direction for future work from us and from the community of researchers focusing on this topic.\n\n*On the constant $M$ in assumption 4:* We apologize for not defining $M$ before discussing A4. It is introduced in the condition on tuning parameter choice $\\lambda$ in Theorem 1, where $M$ is the maximum $\\ell_2$ norm of the columns of pairwise difference operator over rows of $C$. We will add this detailed explanation before introducing the assumptions in the revised manuscript.\n\n*On condition of $w_\\min$:* We interpret $w_\\min$ as the signal strength. Since the condition in Line 200 (Corollary 1) is a lower bound order on $w_\\min$, it is hard to practically verify it. We will include additional experiments in the appendix to show how the recovery and clustering performance get worse when the signal strength $w_\\min$ gets weak. \n\n*On the convergence of the algorithm and monotonicity of the objective function (in Limitations):* The reviewer raised an important aspect on the monotonicity of the objective function as the algorithm progresses. In the experiments we performed, the objective function does not decrease monotonously (not a convex function). The plots of the objective function versus iteration number shows an overall decreasing trend with small local oscillations.\n",
" We thank the reviewer for the constructive comments, positive feedback, and supportive rating.\n\n*On the intuition for choosing the CP model:* We choose CP decomposition model based on the following considerations:\n\n1. Compared to the Tucker decomposition, CP decomposition is unique under weaker conditions where by uniqueness, we mean for tensor $\\mathcal{Y}\\in\\mathbb{R}^{n_1\\times n_2\\times n_3},$ there is only one possible combination of rank-one tensors $\\sum_{r=1}^R a_r\\circ b_r\\circ c_r$ that sums to $\\mathcal{Y}$, with the exception of elementary indeterminacies of scaling and permutation. Kruskal's result provided a sufficient condition for uniqueness and Ten Berge and Sidiropoulos (2002) showed the sufficient condition is also a necessary condition when $R=3$. Later papers considered more general necessary conditions that work for the case when $R>3$. On the contrary, Tucker decomposition is not unique. If we let $U,V,W$ be non-singular matrices, then $\\mathcal{G}\\times_1 A\\times_2B\\times_3 C = (\\mathcal{G}\\times_1 U \\times_2 V \\times_3 W)\\times_1 AU^{-1}\\times_2 BV^{-1}\\times_3 CW^{-1}$. In other words, we can modify the core $\\mathcal{G}$ without affecting the fit so long as we apply the inverse modification to the factor matrices.\n2. Since the tensor Gaussian mixture model can be viewed as a special case of our proposed model (please refer to explanation on Line 91), we choose CP decomposition as backbone to uncover the clustering structure. Our method does not impose a distributional assumption on the error tensor $\\mathcal{E}$. However, if assuming the error tensor follows a standard Gaussian distribution, then the proposed model reduces to a tensor version of the Gaussian mixture model, which enjoys the advantage that they do not require which subpopulation a data point belongs to and allows the model to learn the subpopulations automatically. \n3. Lastly (but not least), we do not want to overemphasize the advantage of the CP decomposition method since there are lots of excellent multiway tensor clustering methods based on both decomposition models, CP and Tucker. In our opinion, it's not a yes or no question in terms of which decomposition method is implemented since they are just two different paths to explore clustering. \n\nWe will add more discussion to this important point (choice of CP) in the final version of the manuscript.\n\n*On CP assumptions in real applications:* \n\nThe assumptions we impose are common (and relevant) for the applications the methods are intended for. One example is the multi-tissue, multi-individual gene expression experiment where the data take the form of an order-3 tensor with three modes representing genes, individuals, and tissues. For simplicity, we denote the latent factors for genes, individuals, and tissues as $G_r, I_r, T_r$ respectively. The rank-1 component under CP assumption in this example $G_r\\circ I_r\\circ T_r$ can be interpreted as the basic unit of an expression pattern (called an expression module). Many papers on this genetics topic adopt this assumption and we list two of them: Hore et al. (2016), Wang et al. (2019). These additional references will be part of the final version of the manuscript. \n\n*On speed up convergence of orthogonalization:* \n\nIntuitively, the periodic orthogonalization prevents multiple recovered factors from “chasing after” the same true factors, allowing for the avoidance of poor local optima and more rapid convergence to the true factors. 
Theoretically, we proved the upper bound for recovered factors, which enjoys consistency when some mild assumptions are imposed on error and $w_\\min$ (Corollary 1). We also did several experiments to show the quicker convergence of adding orthogonalization. Results are provided in Figure 2(d) where the experiments are based on the comparison of dynamic tensor clustering algorithm (which adopts the similar CP decomposition assumption, but the algorithm does not contain orthogonalization step) and our proposed method. That brings clear evidence that adding orthogonalization speeds up the convergence. We thank the reviewer for raising this issue.\n\n*Additional references:*\n\nKruskal JB (1977) Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear algebra and its applications, 18(2), 95-138.\n\nKruskal JB (1989). Rank, decomposition, and uniqueness for 3-way and N-way arrays. In Multiway data analysis (pp. 7-18).\n\nTen Berge JM & Sidiropoulos ND (2002). On uniqueness in CANDECOMP/PARAFAC. Psychometrika, 67(3), 399-409.\n\nHore V, Vinuela A, Buil A, Knight J, McCarthy MI, Small K, & Marchini J (2016). Tensor decomposition for multiple-tissue gene expression experiments. Nature Genetics, 48(9), 1094-1100\n\nWang M, Fischer J & Song YS (2019). Three-way clustering of multi-tissue multi-individual gene expression data using semi-nonnegative tensor decomposition. The Annals of Applied Statistics, 13(2), 1103.\n",
" We thank the reviewer for the constructive comments and supportive rating. We believe that our responses clarify the issues the reviewer raised.\n\n*On the advantages of orthogonalization:*\n\nWe performed several experiments to show the quicker convergence of adding orthogonalization. Results are provided in Figure 2(d) where the experiment is based on the comparison of dynamic tensor clustering algorithm (which adopts the similar CP decomposition assumption, but the algorithm does not contain orthogonalization step) and our proposed method. It provides strong evidence that adding orthogonalization truly speeds up the convergence. We thank the reviewer’s constructive feedback and we will include additional experimental results to compare the ALS based algorithm with/without orthogonalization in the revised appendix. \n\n*On the choice of hyperparameters and robustness:*\n\nWe fully agree that multiple hyperparameters make it difficult to implement the algorithm. During our experiments, we found that several techniques facilitate the algorithm to obtain more robust and accurate clustering performance, including choosing $\\tau$ as median Euclidean distance between any pair of rows that are k-nearest neighbors of each other and normalizing $\\gamma_{i_1, i_2}^1, \\gamma_{i_1, i_2}^2, \\gamma_{i_1, i_2}^3$; see lines 152-156. We also considered other hyperparameter tuning methods such as cross-validation and stability selection. Since both methods are based on resampling, they are unattractive in the tensor setting due to the computational burden compared to extended Bayesian Information Criterion. In terms of robustness, we investigated this issue for the hyperparameter lambda but less for gamma because of the huge computational burden. We thank the reviewer for raising this problem; choosing optimal hyperparameters could be a productive direction for future work!\n\nIn addition, we also thank the reviewer for pointing out a couple of typos. We apologize for missing them in our proofreading; they will be corrected in the revised manuscript.\n",
" This work proposes an algorithm for simultaneous tensor decomposition and clustering using the alternating least squares approach. The proposed method is accompanied by a complexity and convergence analysis. Experimental studies on both synthetic and real-world datasets demonstrate the effectiveness of the proposed method in clustering multiway array data. Strengths:\n1. Paper is well-written and easy to follow.\n2. The proposed method is well motivated.\n3. The proposed method is presented as a concise pseudocode and it is easy to follow.\n4. Complexity analysis shows the proposed method's cost is similar to that of ordinary ALS, showing that it is practically applicable.\n5. Section 3.2 on how to choose regularization weights shows that these vital hyperparameters are chosen systematically from the data and not heuristically.\n\nWeakness:\n1. Missing detailed intuition behind the reason for choosing a CP model as opposed to a Tucker model for example.\n 1. Why is a CP model chosen over other tensor decomposition models? Please elaborate.\n2. Is the CP assumption relevant on real-world datasets?\n3. The paper claims that employing orthogonalization in ALS avoids local minima and speeds up convergence. Although empirical evidence of this general statement it provided in previous works, does this hold in this setting where ALS is employed for clustering? Please provide intuition.\n\n The paper discusses a couple of limitations of this work in section 6 and aims to address them in future work. ",
" This paper tackles the tensor multi-mode clustering problem. The authors model the tensor to cluster as the sum of two terms, the signal which admits a rank-K CANDECOMP/PARAFAC (CP) decomposition, and some noise. They establish a link between the CP decomposition factors and the means of the clusters from the corresponding modes to propose the core optimization problem described in the paper. They aim to approximate as closely as possible the tensor by a rank-K CP decomposition, with a generalized Lasso penalty that limits the number of unique rows in the CP decomposition factors. An Alternating Least Squares (ALS) strategy is used to solve the optimization problem one factor at a time. An orthogonalization step is added before each ALS step, and a Fuse operator is applied after the ALS step.\nThe algorithm is presented with convergence guarantees, and the proposed approach is compared with other tensor mode clustering methods on both simulated and real datasets. The authors also explain how to choose every parameter of the proposed algorithm. ### Strengths\n1. The paper is clear and well written. The progression is logical and the choices made are well justified.\n\n2. The proposed approach is compared favorably with concurrent methods on simulated and real datasets.\n\n3. Theoretical convergence properties of the given algorithm are derived.\n\n### Weaknesses\n4. I have concerns about the proof of Theorem 1. See the section **Questions** below.\n\n5. No code is given to reproduce the results.\n\n**Update after rebuttal**: the authors have provided code (although I did not have time to go through it) and addressed my concerns regarding the proof of Theorem 1. I think that the clarity of the proof can still be improved, but the authors are working on it, and I have good hope that they will manage to do so in the given amount of time. I start with an essential question determining my acceptance of the paper.\nPlease correct me if I am wrong, but it seems that the proof of Theorem 1 bites its tail. Indeed, Theorem 1 gives an upper bound on the distance between the true and the estimated factors. However, its proof is based on Lemma 3, which assumes this same upper bound. Furthermore, it is written in Lemma 3 that this assumption can be made \"without loss of generality\". I may be missing something here, so please enlighten me.\n\nI will now ask some questions, give suggestions to improve readability and point out mistakes or typos.\n- Could it be possible to assess the validity of the A1-A4 assumptions on a given dataset? Were they all valid on the simulated datasets?\n- General remark: it would help the reader if different indices were used for the rows and the columns. For instance, use $k$ for columns and $i$ for rows: $\\mathbf A_{:k}$ and $\\mathbf A_{i:}$.\n- Line 71: the definition of the spectral norm should be $ \\Vert \\mathcal Y \\Vert = \\text{max}_{\\Vert \\mathbf u_1 \\Vert_2 = \\Vert \\mathbf u_2 \\Vert_2 = \\Vert \\mathbf u_3 \\Vert_2 = 1} \\mathcal Y(\\mathbf u_1, \\mathbf u_2, \\mathbf u_3)$.\n- Line 79: \"rank-K CP decomposition\" and no closing parenthesis at the end of the line.\n- Line 80: \"unit ~~one~~ norm\".\n- Line 121: it should be [...] $+ \\lambda \\Vert \\boldsymbol \\Delta \\mathbf u \\Vert_1$. 
$\\boldsymbol \\Delta$ is also implicitly defined as specific matrices per mode were previously introduced.\n- Line 169: $\\gamma$ is used for the signal ratio, there is no possible confusion with the regularization parameters, but maybe another symbol should be preferred.\n- Line 170: in A4, constant $M$ has not been introduced before.\n- Line 172: \"on **the** rank $K$\".\n- Line 191: observation made that large $\\gamma$ results in lower error bounds seems wrong according to the bound given in Theorem 1.\n- Line 200: what does the condition on $w_\\text{min}$ means? Is it a high signal-to-noise ratio condition? Was it verified in the simulations?\n- Lines 256-257: the motivation for adding orthogonality was, according to line 119, to get more rapid convergence, but here, a concern is addressed that this may increase the number of iterations. Talking about verification in lines 256-257 would be more logical.\n\n- Appendix line 65, on the second line of the equation defining $\\Delta$ (maybe another symbol would be a good choice not to get confused with $\\boldsymbol \\Delta$): why does this become an upper bound with $\\mathbf A_{: i}$ replaced with $(\\mathbf A_{: i} + \\hat \\xi_i)$?\n- Appendix line 68 (and after): $II_{11}'$ is used but never introduced. It happens after with different quantities.\n- Appendix line 71, on the first line of the equation: $\\mathcal Y^*$ should be replaced with $w_l$.\n- Appendix line 73, on the first of the equation: $w_l$ is missing.\n- Appendix line 79: indices are wrong. It should be $j_1$ for the $\\mathbf A$ terms and $j_2$ for the $\\mathbf B$ terms. I get the idea of the majorization, but it is not well-written: the sums have been removed, but there are still terms in $j_1$ (and normally $j_2$) appearing in the upper bound.\n- Appendix line 83, equation 11: the term $2 w_\\text{max} \\rho$ supposes that $j_1 \\neq j_2$, but we could have the equality.\n- Appendix line 84, the whole bound should be multiplied by $((K - 1) \\Delta)^2$ and the last term is $(K - 2) \\rho^2 w_\\text{max}$.\n- Appendix line 146: $\\Vert . \\Vert_2$.\n- Appendix line 149 (and after): $\\alpha^2$ turned into $\\alpha$.\n- Appendix line 155: I do not get how the upper bound is simplified.\n- Appendix line 156: I do not get the bounds on $| 1 / \\kappa |$, and I do not get how the upper bound is simplified. The authors identified that their convergence guarantees were valid for the real rank $K$ and that more work is needed to derive results for an estimated rank $K$. The convergence results show that if the number of samples goes to infinity and the tensor is noiseless, the true factors will be retrieved. However, it does not say anything about the algorithm reaching a stationary point. Is it possible to show, for example, that the objective function decreases monotonously?",
" This paper proposes a new formulation and a new ALS based algorithm for tensor clustering. Upper bound of the theoretical convergence rate of this algorithm is provided, and experimental results show that this algorithm yields more accurate clustering for multiple cases compared to existing methods. Strengths:\n\n- The problem of tensor clustering is important\n- Detailed theoretical analysis of the algorithm, and better experimental results compared to existing work.\n\nWeaknesses:\n\n- As is declared in the paper: \"First, the orthogonalization step is performed before each iteration of ALS, which allows for the avoidance of local optima and more rapid convergence to the true factors\". There is no detailed analysis in the paper to show the advantage of this orthogonalization step. Additional experiments that compare the algorithm with orthogonalization and the one without orthogonalization would be helpful.\n- The formulation (1) includes multiple hyperparameters. The choice of these hyperparameters in the paper is presented in 3.2 However, questions still remain as to 1) is the formulation robust to these hyperparameters, and 2) is there other reasonable ways to choose these hyperparameters, and discussions over them would be helpful.\n\n-------\nAfter rebuttal: I thank authors for the detailed response. My concerns are addressed. \nLine 70: spectrum norm formulation is wrong, I guess the authors mean max rather than min.\n\nLine 79: There is one redundant ) at the end of this line. There are no negative social impacts of this work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"e_sOn_wSflx",
"7-xe_M4_DkV",
"bI7XrHAU9Iz",
"rPDxU5i67DES",
"lYaxkDJD_Rxp",
"W2xFDUDWsok",
"nips_2022_y8FN4dHdxOE",
"JaR9eozPdWx",
"JaR9eozPdWx",
"I2VW2HAMJ3K",
"OnYlF6nvpIt",
"nips_2022_y8FN4dHdxOE",
"nips_2022_y8FN4dHdxOE",
"nips_2022_y8FN4dHdxOE"
] |
nips_2022_GjWDguPZRmr | Improving Variational Autoencoders with Density Gap-based Regularization | Variational autoencoders (VAEs) are one of the most powerful unsupervised learning frameworks in NLP for latent representation learning and latent-directed generation. The classic optimization goal of VAEs is to maximize the Evidence Lower Bound (ELBo), which consists of a conditional likelihood for generation and a negative Kullback-Leibler (KL) divergence for regularization. In practice, optimizing ELBo often leads the posterior distribution of all samples converging to the same degenerated local optimum, namely posterior collapse or KL vanishing. There are effective ways proposed to prevent posterior collapse in VAEs, but we observe that they in essence make trade-offs between posterior collapse and the hole problem, i.e., the mismatch between the aggregated posterior distribution and the prior distribution. To this end, we introduce new training objectives to tackle both problems through a novel regularization based on the probabilistic density gap between the aggregated posterior distribution and the prior distribution. Through experiments on language modeling, latent space visualization, and interpolation, we show that our proposed method can solve both problems effectively and thus outperforms the existing methods in latent-directed generation. To the best of our knowledge, we are the first to jointly solve the hole problem and posterior collapse. | Accept | The paper addresses the KL collapse of VAE models by proposing a new regularization. Reviewers generally acknowledge the novelty of the work and have the tendency of recommending acceptance. | train | [
"jJ_2Sx3V7uj",
"szl37BZVkRl",
"44mMCswbSm4",
"nOYQlXDKHP",
"jbsntyREtGe",
"GVwwHw6OLs",
"WFhw_INFmyx",
"3So4PCaGdfU",
"Tx1jdhD5yUH",
"Pe6Ro9En2Co",
"FBUX9rf-Ta",
"e6gW1OGY6-"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the encouraging comment! We are glad that you found our reply helpful.",
" Thanks to the authors for their response to my review (and apologies for my late response to the authors). I found the response helpful -- specifically, the authors' comments on runtime have given me a better idea about the practicality of the proposed approach.",
" Thank you very much for increasing the score and for the insightful feedback! We have updated the paper in the revised version to clarify all the points raised, including using the standard VAE notation and adding the citations from your review.",
" I upgrade my grade from 7 to 8. It would be nice to update the paper to clarify these points + add the citations from my review.\n\nAlso, I think the paper would be more clear without these two X and Y RVs, and move to the \"standard\" notation for VAE, as this is sufficient for the paper.",
" Thanks for your comment! We approximated the test set log likelihoods with $16$ importance weighted samples of $\\mathbf{z}$ from both the variational distribution and prior distribution for each datapoint $\\mathbf{x}$ (i.e., $8$ samples from each distribution), which yields robust approximations empirically. Please see also the newly provided Appendix C in the revised version for the detailed sampling process. To give more details, we conducted evaluations for all models across $10$ different random seeds under this setting and reported the mean values of log likelihoods at the precision of $0.1$, where the variances are all less than $0.01$.",
" Thanks to the authors for addressing the points in the review; I am happy to increase my score.\n\nRegarding the number of samples in the evaluations, I did not mean the number of data points but rather how many samples of $\\mathbf{z}$ from the variational distribution are used in order to approximate the test set log likelihoods?",
" Thank you very much for your positive feedback and constructive criticism! In the following we address it point by point.\n\n> **Q:**\n\t\"Throughout, the use of $\\mathbf{x}$ and $\\mathbf{y}$ to refer to instances of the same variable is confusing. Why not just use x throughout as is commonly done in the literature, avoiding the introduction of unnecessary notation?\"\n\n**A:**\n\tWe apologize for the confusion on our notations. We actually followed the notations used in [1], which used two variables $\\mathbf{x}$ and $\\mathbf{y}$ to represent the input and output of VAEs. Indeed, we agree that it will be a much better choice to use $\\mathbf{x}$ only, which follows the standard formulation of VAEs. We have corrected the notations in the revised version of our paper.\n\n> **Q:**\n\t\"Although it is clear why we would expect the proposed method to solve the hole problem, it is not explained why we would expect it to solve posterior collapse? Surely, as with the vanilla VAE, there is a local optimum where $q_{\\phi}(\\mathbf{z}|\\mathbf{x})=p(\\mathbf{z})\\forall\\mathbf{x}$?\"\n\n**A:**\n\tThanks for your question. The proposed objective seeks to optimize both the log-likelihood of data and the sum of marginal mutual information between the latent variable and the data. With such an objective design, the posterior collapse (or KL vanishing) can be solved effectively as the mutual information sub-objective is a lower bound of the KL divergence term in ELBo according to Hoffman et al.'s formulation.\n\n> **Q:**\n\t\"When reading the paper from beginning to end in order, it is unclear what the point is of Equation (3) and the paragraph just preceding it. I believe the clarity would be improved were this part moved to the part before Equation (8).\"\n\n**A:**\n\tThanks for your valuable suggestion. We have made changes according to your suggestion in the revised version.\n\n> **Q:**\n\t\"In Figure 1, should the $p_{\\phi}$ terms not be $q_{\\phi}$?\"\n\n**A:**\n\tThanks for pointing out the typo, which has been fixed in our revised version.\n\n> **Q:**\n\t\"Are the results in Table 2 computed on the test set?\"\n\n**A:**\n\tYes, we computed the results in Table 2 on the test set following prior work [2].\n\n\n> **Q:**\n\t\"How many samples are used for the evaluation?\"\n\n**A:**\n\tWe used the full test set for evaluation. The numbers of samples of different datasets are given in Table 1.\n\n\n> **Q:**\n\t\"Why not report the (importance weighted) ELBO taken with a large number of samples, as is commonly done in the literature?\"\n\n**A:**\n\tWe computed the prior Log-Likelihood $priorLL(\\theta)$, which shares the same information with the Negative Log-Likelihood (NLL) estimated by (importance weighted) ELBO.\n\n\n> **Q:**\n\t\"The examples shown in Figures 6 and 7 are extremely short sentences, and at least qualitatively it is not clear that the DG-VAE is better than the $\\beta$-VAE\"\n\n**A:**\n\tThanks for your valuable suggestion. We have added cases of long sentences on each dataset in our revised version, and highlighted tokens of the longest common subsequences for clear comparisons. \n\n\n> **Q:**\n\t\"Why only show examples of the $\\beta$-VAE and not the other baselines?\"\n\n**A:**\n\tWe only compared our method with $\\beta$-VAE(0.1) in the case study as it is the best overall performing baseline model according to the automatic metrics. 
We will add additional results from other models in the revised version to address your comment.\n\n\n> **Q:**\n\t\"Is your method the first which intends to jointly solve the hole problem and posterior collapse?\" & \"If so, this would be a valuable statement to add.\"\n\n**A:**\n\tThanks for your valuable suggestion. To the best of our knowledge, we are indeed the first to jointly solve the hole problem and posterior collapse. We have added this statement in our revised version.\n\n\n> **Q:**\n\t\"Arguably, the most important limitation of the proposed method is that the training objective is no longer a lower bound on the true log-likelihood of the data. Does this mean that the model is no longer suitable for tasks such as density estimation, out-of-distribution detection, etc. which the vanilla VAE is otherwise useful for?\"\n\n**A:**\n\tThanks for your suggestions; we don't have a definite answer to the question, but we do agree that these are very interesting directions for exploration.\n\n\n[1] Yu W, Wu L, Zeng Q, et al. Crossing Variational Autoencoders for Answer Retrieval[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 5635-5641.\n\n[2] Zhu Q, Bi W, Liu X, et al. A Batch Normalized Inference Network Keeps the KL Vanishing Away[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 2636-2649.\n",
" Thank you for the detailed and insightful review. Below, we address your points individually.\n\n> **Q:**\n\t\"The interpolation study shows is interesting and the proposed approach seems to be better than the baselines on the chosen metric for interpolation. However, this metric is not directly related to the mutual information between latent variables and text or the diversity one can expect from the samples of the generative model.\"\n\n**A:**\n\tThanks for your comment. Empirically, we do observe that the metric for interpolation (i.e., Rouge-L F1-score) has a strong correlation with the mutual information between latent variables and text. This is intuitive as the generated sentences tend to be irrelevant to the ground truth when the mutual information is poor, hence leading to a low Rouge-L F1-score (and vice versa).\n\n> **Q:**\n\t\"The only metric BN-VAE does slightly worse is CU, and I am unsure how to interpret this unconventional metric.\"\n\n**A:**\n\tWe propose CU to quantify the severity of the hole problem, by measuring the degree of matching between the aggregated posterior distribution and the prior distribution in a dimension-wise perspective. A lower value of CU indicates a larger mismatch between these two distributions, and hence a severer hole issue.\n\n> **Q:**\n\t\"The proposed approach also is more expensive due to the quantities involved with the aggregated posterior. A comparison of runtime with respect to other approaches would be helpful.\" & \"Please address the runtime of the proposed approach.\"\n\n**A:**\n\tThanks for your insightful comment! Indeed, our approach is a bit more expensive than the baseline models due to the density gap-based regularization. However, this additional computational cost is very affordable. For instance, we compared the training time of our model and the vanilla VAE based on the default setting of batch size $|B|=32$, latent dimension $Dim=32$, and the number of samplings $M=32$ for Monte Carlo approximation in Eq. 9 and Eq. 11. The averaged training time of our model (over all experimental datasets) is only $11\\\\%$ higher than that of the vanilla VAE. We will include run time anlaysis results of our model and the baselines in the revised version of our paper.\n\n> **Q:**\n\t\"some notes on the presentation: Line 53: q(z|x) instead of y, rather x and y are confusing, its the same thing Indicator operator used for MI Line 82 Text in math is poorly formatted\"\n\n**A:**\n\tWe apologize for the confusion on our notations. We actually followed the notations used in [1], which used two variables $\\mathbf{x}$ and $\\mathbf{y}$ to represent the input and output of VAEs. Indeed, we agree that it will be a much better choice to use $\\mathbf{x}$ only, which follows the standard formulation of VAEs. We have corrected the notations in the revised version of our paper.\n\n> **Q:**\n\t\"Some related work: VAE with a VampPrior, Tomczak et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, Chen et al.\"\n\n**A:**\n\tThanks for your valuable sharing. We have added these related work to appropriate places in our revised version.\n\n\n[1] Yu W, Wu L, Zeng Q, et al. Crossing Variational Autoencoders for Answer Retrieval[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020: 5635-5641.",
" Thanks a lot for your positive review and valuable feedback. Here are our answers for your questions:\n\n> **Q:**\n\t\"not sure to understand whether the proposed objective is still a lower bound to the log-likelihood\" & \"is the proposed objective still a lower bound to the log-likelihood?\"\n\n**A:**\n\tThanks for your question. The proposed objective is no longer a lower bound as it seeks to optimize both the log-likelihood of data and the sum of marginal mutual information between the latent variable and the data. With such an objective design, the posterior collapse (or KL vanishing) can be solved effectively as the mutual information sub-objective is a lower bound of the KL divergence term in ELBo according to Hoffman et al.'s formulation.\n\n\n> **Q:**\n\t\"weird notation, e.g. the use of X and Y variables, it should be X only, I don't understand why to introduce a second RV for the same observation.\"\n\n**A:**\n\tWe apologize for the confusion on our notations. We actually followed the notations used in [1], which used two variables $\\mathbf{x}$ and $\\mathbf{y}$ to represent the input and output of VAEs. Indeed, we agree that it will be a much better choice to use $\\mathbf{x}$ only, which follows the standard formulation of VAEs. We have corrected the notations in the revised version of our paper.\n\n\n> **Q:**\n\t\"Also, q(n) is the same as p(n), no? Why a proposal distribution here? It is fixed(?)\"\n\n**A:**\n\tThanks for pointing out the typo. We have fixed this typo by changing $p(\\mathbf{x}=x_n)$ to $q_{\\phi}(n)$ to in the revised version of our paper. As stated in line 64, $n$ is the index (or identity) of datapoints whose posterior distributions compose the aggregated posterior distribution, so it is fixed to the discrete uniform distribution $q_{\\phi}(n) \\equiv \\frac{1}{N}$.\n\n\n> **Q:**\n\t\"l. 123: I don't understand what is the point of stating that \"we only consider z \\in ...\" => q(z) is Gaussian, so we can't have q(z) = 0 anyway, no?\"\n\n**A:**\n\tBesides the commonly used Gaussian distribution, we also consider von Mises-Fisher (vMF) distribution and propose corresponding variants, e.g., DG-vMF-VAEs. In theory, we can have $q_{\\phi}(z)=0$ in DG-vMF-VAEs, but the domain of $DG(\\theta,\\phi;z)$ is $\\\\{z|q_{\\phi}(z)>0\\\\}$, as we only need to compute this term for samples from $q_{\\phi}(\\mathbf{z})$, as stated in Equation 7. That is why we stated that we only consider $z \\in \\\\{z|q_{\\phi}(z)>0\\\\}$.\n\t\n\n> **Q:**\n\t\"in the end, the objective is only a combination of ELBo + mutual information? Could you explain a little bit why is this novel? Especially, what is the difference with [1]?\"\n\n**A:**\n\tOur proposed model seeks to optimize both the log-likelihood of data and the sum of marginal mutual information between the latent variable and the data. The key novelties of our models are twofold: (1) in contrast to existing models such as Adversarial Autoencoder [2] whose regularizer is merely based on sampling sets, and thus is sub-optimal, our model innovatively takes the perspective of PDFs, as we discuss in line 111, which is proved to form a continuous latent space that matches the prior much better, as illustrated in Appendix A; (2) for Gaussian settings, we present a regularizer for modelling more aggressive mutual information between the latent variable and data by imposing regularisation over marginal distributions over each dimension of the latent variable, as we introduce in line 159. 
This intends to make full use of the latent dimensions instead of only activating part of them.\n\nAccording to [3], our training objective differs from that of Asymmetric MIM mainly on the following two points: (1) MIM introduces parameterized approximate priors on data and latent variables to avoid the need for unstable adversarial training and the estimation of mutual information, while our method only adopts the anchor distributions (following vanilla VAE) and replaces adversarial training (as Adversarial Autoencoder [2] does) with our Density-Gap based regularizer; (2) Asymmetric MIM maximizes the mutual information between the data distribution and the latent variable distribution, which is similar to our Equation 10, but we further extend this to the sum of mutual information between the data distribution and the marginal distributions over each dimension of the latent variable (as illustrated in our Equation 13), so as to capture richer mutual information.\n\n\n> **Q:**\n\t\"The following citations are missing and they provide a broader view on how researchers tackle the problem in the NLP community:\"\n\n**A:**\n\tThanks for your valuable sharing. We have added these related work to appropriate places in our revised version.\n\n\n[1] Yu W, Wu L, Zeng Q, et al. Crossing Variational Autoencoders for Answer Retrieval[C]//ACL. 2020: 5635-5641.\n\n[2] Makhzani A, Shlens J, Jaitly N, et al. Adversarial autoencoders[J], 2015.\n\n[3] Livne M, Swersky K, Fleet D J. MIM: Mutual Information Machine[J], 2019.",
" This paper addresses posterior collapse in VAE and also tries to mitigate the issue with many existing solutions for this which is the trade off with poor fit to the prior.\n\nThe authors propose a novel regularization to substitute the KL regularization in ELBo for VAEs, which is based on the density gap between the aggregated posterior and the prior. Since quantities related to aggregated posterior need for the regularizer depend on the whole dataset are expensive to compute, this paper further changes the objective to consider aggregation over minibatches only. \n\nThis regularizer maximizes the ELBO as well as the mutual information between the input and the latent variable.\n\nFor Gaussian settings, the authors present a regularizer for more aggressive mutual information between the latent variable and data by imposing a regularizer over marginal distributions over each dimension of the latent variable.\n\nThe empirical comparison is done with relevant baselines on text datasets.\n The paper is sound and the baselines are well chosen covering a range of related work.\n\nThe technical contribution hinges very heavily on the paper by Hoffman et al.'s formulation, but I believe the proposed regularizer is novel.\n\nThe interpolation study shows is interesting and the proposed approach seems to be better than the baselines on the chosen metric for interpolation. However, this metric is not directly related to mutual information between latent variables and text or the diversity one can expect from the samples of the generative model.\n\nMy main concern is that the main empirical results show that baseline/competing models are as good/better than the proposed method -- especially BN-VAE. The only metric BN-VAE does slightly worse is CU, and I am unsure how to interpret this unconventional metric.\n\nThe proposed approach also is more expensive due to the quantities involved with the aggregated posterior. A comparison of runtime with respect to other approaches would be helpful.\n\nSome related work:\nVAE with a VampPrior, Tomczak et al.\nLearning to Explain: An Information-Theoretic Perspective on Model Interpretation, Chen et al. some notes on the presentation:\nLine 53: q(z|x) instead of y, rather x and y are confusing, its the same thing\nIndicator operator used for MI\nLine 82\nText in math is poorly formatted\n Please address the runtime of the proposed approach.",
" This paper focuses on a well-known problem in variational auto-encoders: the learned latent representation often has \"gaps\" in its prior, i.e. there are latent samples with high prior PDF values that fail to generate coherent outputs.\n\nThe authors analyse the problem and propose a novel objective to fix the issue that combines the ELBO with a mutual information term. Importantly, the proposed surrogate objective is well-motivated. **Strengths**\n\n- important problem in deep generative modeling\n- contribution is interesting and well-motivated\n- good experimental section\n\n**Weaknesses**\n\n- not sure to understand whether the proposed objective is still a lower bound to the log-likelihood\n- weird notation, e.g. the use of X and Y variables, it should be X only, I don't understand why to introduce a second RV for the same observation. Also, q(n) is the same as p(n), no? Why a proposal distribution here? It is fixed(?)\n\n**Missing citations**\n\nThe following citations are missing and they provide a broader view on how researchers tackle the problem in the NLP community:\n- SentenceMIM: A Latent Variable Language Model (Micha Livne, Kevin Swersky, David J. Fleet)\n- A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text (Bohan Li, Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick, Yiming Yang)\n- Preventing posterior collapse in variational autoencoders for text generation via decoder regularization (Alban Petit, Caio Corro)\n- Preventing Posterior Collapse with Levenshtein Variational Autoencoder (Serhii Havrylov, Ivan Titov) - is the proposed objective still a lower bound to the log-likelihood?\n- l. 123: I don't understand what is the point of stating that \"we only consider z \\in ...\" => q(z) is Gaussian, so we can't have q(z) = 0 anyway, no?\n- in the end, the objective is only a combination of ELBo + mutual information? Could you explain a little bit why is this novel? Especially, what is the difference with [1]?\n\n[1] MIM: Mutual Information Machine (Micha Livne et al.) nothing to report\n",
" This paper proposes a unified solution to two problems which can occur when training VAEs - the hole problem (where the aggregated variational distribution fails to fit the prior) and posterior collapse (where the variational distribution becomes the same for every data point and is therefore uninformative).\n\nThe proposed solution is a modification to the VAE's training objective, where the per-data-point KL divergence term is replaced by the KL divergence from the aggregated posterior to the prior. This aggregate KL divergence term is expressed as the sum of KL divergences over the dimensions of the latent variable.\n\nThe authors compare their method to various baselines, all of which were designed to solve the posterior collapse issue. The proposed method appears to address the posterior collapse problem, while suffering the hole problem to a lesser extent than the baselines. Strengths:\n- This is a conceptually simple method which appears to be effective at solving both the hole problem and posterior collapse when training VAEs.\n- The authors do a good job at explaining the hole problem and posterior collapse, as well as the apparent trade-off between the two.\n - Figure 1 is particularly clear for this.\n- The empirical results appear to outperform the baselines, in terms of having all of the latent dimensions being both 'active' and 'consistent' (as defined in Appendix C), as well as the latent representations and observations having a high amount of mutual information.\n - The latent space visualisations appear to indicate that the proposed method clearly outperforms the baselines at solving both the hole problem and posterior collapse.\n - In addition, the proposed method appears to consistently outperform baselines at generating interpolations between sentences.\n- Although this may not be the most significant advance in generative modelling by itself, it does seem to be a piece of work that the community could easily build on in order to make incremental advances in the field.\n\n\nWeaknesses:\n- Although the authors explain the hole problem and posterior collapse well, the explanation of their actual method difficult is to follow. Some examples include:\n - Throughout, the use of $\\mathbf{x}$ and $\\mathbf{y}$ to refer to instances of the same variable is confusing. Why not just use $\\mathbf{x}$ throughout as is commonly done in the literature, avoiding the introduction of unnecessary notation?\n - Although it is clear why we would expect the proposed method to solve the hole problem, it is not explained why we would expect it to solve posterior collapse? Surely, as with the vanilla VAE, there is a local optimum where $q_{\\phi}(\\mathbf{z}|\\mathbf{x}) = p(\\mathbf{z}) \\forall \\mathbf{x}$? \n - When reading the paper from beginning to end in order, it is unclear what the point is of Equation (3) and the paragraph just preceding it. 
I believe the clarity would be improved were this part moved to the part before Equation (8).\n - In Figure 1, should the $p_{\\phi}$ terms not be $q_{\\phi}$?\n- The evaluation in the experiments section is not entirely convincing.\n - There are several missing details, e.g.\n - Are the results in Table 2 computed on the test set?\n - How many samples are used for the evaluation?\n - Why not report the (importance weighted) ELBO taken with a large number of samples, as is commonly done in the literature?\n - The interpolation task used to measure the ability of the model to do latent-guided generation isn't totally convincing.\n - The examples shown in Figures 6 and 7 are extremely short sentences, and at least qualitatively it is not clear that the DG-VAE is better than the $\\beta$-VAE.\n - Why only show examples of the $\\beta$-VAE and not the other baselines?\n\nUPDATE: Increased score post rebuttal. - Is your method the first which intends to jointly solve the hole problem and posterior collapse?\n - If so, this would be a valuable statement to add.\n\nOther questions included in the strengths and weaknesses section. - The authors do not discuss the limitations of their method.\n - Arguably, the most important limitation of the proposed method is that the training objective is no longer a lower bound on the true log likelihood of the data. Does this mean that the model is no longer suitable for tasks such as density estimation, out-of-distribution detection, etc. which the vanilla VAE is otherwise useful for?"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"szl37BZVkRl",
"3So4PCaGdfU",
"nOYQlXDKHP",
"Tx1jdhD5yUH",
"GVwwHw6OLs",
"WFhw_INFmyx",
"e6gW1OGY6-",
"Pe6Ro9En2Co",
"FBUX9rf-Ta",
"nips_2022_GjWDguPZRmr",
"nips_2022_GjWDguPZRmr",
"nips_2022_GjWDguPZRmr"
] |
nips_2022_DSEP9rCvZln | Inherently Explainable Reinforcement Learning in Natural Language | We focus on the task of creating a reinforcement learning agent that is inherently explainable---with the ability to produce immediate local explanations by thinking out loud while performing a task and analyzing entire trajectories post-hoc to produce temporally extended explanations. This Hierarchically Explainable Reinforcement Learning agent (HEX-RL), operates in Interactive Fictions, text-based game environments in which an agent perceives and acts upon the world using textual natural language. These games are usually structured as puzzles or quests with long-term dependencies in which an agent must complete a sequence of actions to succeed---providing ideal environments in which to test an agent's ability to explain its actions. Our agent is designed to treat explainability as a first-class citizen, using an extracted symbolic knowledge graph-based state representation coupled with a Hierarchical Graph Attention mechanism that points to the facts in the internal graph representation that most influenced the choice of actions. Experiments show that this agent provides significantly improved explanations over strong baselines, as rated by human participants generally unfamiliar with the environment, while also matching state-of-the-art task performance. | Accept | The paper proposes a hierarchical approach to explainable RL which combines different modules, including a knowledge graph, to generate natural language explanations.
There was a debate among the reviewers about whether this approach is novel, which was the main concern remaining after the rebuttal phase. The other concerns were addressed fairly well by the authors.
Although no individual module in the proposed approach is novel, the way they are combined to address the specific problem of explainability, especially in text games, appears novel and sound. The results are convincing, and the evaluation against a large number of baselines reflects a substantial amount of work and a solid scientific method.
For these reasons, acceptance is recommended. | train | [
"92v42qZWl20",
"H1NX7NE-tMl",
"3fHKHZnMJ4I",
"gF3Q_JaiKPD",
"PUc7WCN7hZD",
"HH1PeBEQGZJ",
"Fgm2Z4-0I6",
"yC4IL5u9urM",
"PhV7sC0lFB",
"dIqUIDj5-x",
"6K5l-Pi7ejF",
"xnyJuKV5UI",
"Zcw2eb0S_Cn"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for responding to our changes and for the time you have spend in writing a thorough review with points for improvement. We are encouraged to hear that you think the paper is ready for acceptance and hope that a fruitful discussion continues into the post-rebuttal period with the other reviewers.",
" My apologize, my previous post was not correctly posted. Thank you for the kind reminder.\ni acknowledge the authors improvements over the paper, the small add-ons there and there maks the paper stronger, and ready for admission from my perspective. I thus increased my score. However, I also understand the other reviewer comments, and we will likely continue the discussion after the rebutal.",
" This is a gentle reminder that the discussion period closes soon. We have attempted to address all concerns raised, summarized the rationale for doing so and modified the paper itself. We would be greatly encouraged if the reviewers either raised their scores or engaged in further discussion with us. Thanks a lot.",
" This is a gentle reminder that the discussion period closes soon. We have attempted to address all concerns raised, summarized the rationale for doing so, and modified the paper itself. We would be greatly encouraged if the reviewers either raised their scores or engaged in further discussion with us. Thanks a lot.",
" This is a gentle reminder that the discussion period closes soon. We have attempted to address all concerns raised, summarized the rationale for doing so, and modified the paper itself. We would be greatly encouraged if the reviewers either raised their scores or engaged in further discussion with us. Thanks a lot.",
" This is a gentle reminder that the discussion period closes soon. We have attempted to address all concerns raised, summarized the rationale for doing so and modified the paper itself. We would be greatly encouraged if the reviewers either raised their scores or engaged in further discussion with us. Thanks a lot.",
" We would first like to thank the reviewer for their thoughtful comments and time. We will clarify claims below.\n\n1. _''The novelty is a bit low. The whole ideas used here for immediate explanation and trajectory level explanation are not totally new and novelty is limited. In terms of immediate explanation, the idea of using Hierarchical Graph Attention for explainability is not new and a similar approach has been proposed in [Xu et al.]. The same applies In terms of providing a trajectory-level explanation. ''_\n\n- To clarify the novelty of our approach - we are the first to use knowledge graph attention-based attribution to explain actions in such grounded environments.\n\n Even though Qbert (Ammanabrolu et al.) and SHA-KG (Xu et al.) are both knowledge graph-based agents, these architectures do now allow for as fine-grained attention-based attribution as our architecture does---e.g. Q*BERT does not use relationship information in their policy and SHA-KG averages attention across large portions of the graph and is unable to point to specific triples in its KG representation to explain an action. \n\n2. _''Authors provide an ablation study in Appendix C which was useful experiment, but I am not fully convinced if those three filters are all necessary. I think it would have been interesting to see the performance of just a larger language model (i.e., a larger CALM) for filtering or a fine tuned version to further simplify the design of filtering.''_\n\n- In terms of temporal explanations, the reason why we did not use a *single* general model to produce temporally-extended explanations in this paper is that different filters serve different functions.\n\n - Bayesian filter helps to form the most likely chain of causal dependencies that lead to a state. It only considers causal dependencies in this specific game without any semantic relationship between steps.\n - CALM filters down the set of important states by finding the states that have corresponding actions that a human player would be more likely to perform. Since CALM is trained on human transcripts, it considers causal dependencies from the human perspective.\n - The semantic State-Action Filter takes semantic relationships into consideration.\n\n These three filters serve different requirements of temporally-extended explanations. One can use any combination of them for serving different functions. For example, we can produce temporally-extended explanations about causal with the Baysian filter only.\n \n To the best of our knowledge, each of these filters themselves has never been used for the purposes of reducing an agent's trajectory to provide more useful temporal explanations. We contend that all steps are individually novel as well as their combination.\n\n3. _''I am not fully convinced of this claim in the introduction-- \"study against strong baselines that shows that our agent generates significantly improved explanations\"; would have been nice to see other closely related baseline for Immediate explanation Evaluation such as [Xu et al.]''_\n\n- Our paper is the first to use knowledge graph attention-based attribution to explain actions in such grounded environments, hence there immediate prior works to compare to. We have adapted prior works and devised strong baselines here to compare to.\n\n As stated before, SHA-KG [Xu et al.] has no component to provide human-readable explanations. 
They average attention across large portions of the graph and so are unable to point to specific triples in its KG representation to explain an action. Simply put, their agent can show whether a relatively large subgraph was relevant for the current action but cannot provide a readable explanation for it. They consequently do not validate or test the utility of their explanations via participant study either.\n\n4. _''What is the full action space? Some discussion about it in main paper would be helpful, Also how this method can be applied in settings with large action space.''_\n\n- Thanks for pointing this out.\n\n This is an established action space used by the literature for the Jericho benchmark:\n\n ''Jericho provides the capability to extract game-specific vocabulary and action templates. These templates contain up to two blanks, so a typical game with 200 templates and a 700 word vocabulary yields an action space of O(TV^2 ) ≈ 98 million, three orders of magnitude smaller than the 240-billion space of 4-word actions using vocabulary alone.''\n \n We have relegated this discussion to Appendix A.1 in the interest of space.\n\n",
" We thank the reviewer for their time and effort and are encouraged by their analysis of this work's significance. We will attempt to address a couple of the weaknesses pointed out.\n\n1. _''The related work is a bit short, and I would recommend adding different research directions in text games that have been pursued.''_\n\n- Thanks for your suggestion. We have revised our work and added as many additional citations as possible given space considerations. \n\n2. _''I would recommend adding some module names in Figure 2 in addition to the network names.''_\n\n- Thanks for this suggestion. We added more module names in Figure 2 to make it easier to follow in revision.\n\n3. _''It exists a discussion in Appendix C (the impact of the filtering), which is never mentioned in the paper.''_\n\n- Thanks for pointing this out. We added the summary to **Line 256**.\n\n4. _''The impact of some hyperparameters (e.g., l193, trajectories) are also not discussed while they may be of paramount importance.''_\n\n- Thanks for pointing this out. Our initial experiments suggested that the larger the number of trajectories we use (l193), the more accurate the Bayesian State Filter. We will revise our appendix to include a discussion regarding our hyperparameter choices. We further note that we report these parameters in **Appendix A.4**.\n\n5. _''...unfair to compare an unconstraint LSTM model with a grammar-based model while assessing for clarity.''_\n\n- In order to have a fair comparison with the LSTM model, we extract the most important substring in the observations through LSTM attention and then use substrings of the human-written observation which contain those words to create an explanation akin to a slot filler. The LSTM model is therefore not unconstrained and we believe it aligns with the reviewer’s suggestion.\n\n We will also note that many human participants state this LSTM explanation is more natural sounding than our immediate explanation from KG with a grammar-based model.\n\n6. _''... the model limitations are never discussed, and the analysis remains purely quantitive without any hints for future direction.''_\n\n- Thanks for this great suggestion. As mentioned in the old paper, as our system relies on graph hierarchical graph attention to generate immediate explanations, we are limited to providing explanations on systems affected by natural language. \n\n We added more limitations according to your advice to the *Evaluation* section (4.2 and 4.3) of the current revision.\n \n As our system relies on graph hierarchical graph attention to generating immediate explanations, a well-trained knowledge graph representation module of the world knowledge is required. Hence, our model is limited to providing explanations on systems affected by natural language.\n\n7. _''...how satisfactory are the explanations?...I would tend to think that the current definition may be too light to apprehend the model performance correctly. ''_\n\n- We asked human participants whether or not the agent's explanation was good enough to enable them to play the game with equivalent skill and asked the participant to condense this judgment down into a goal score. We believe this phrasing serves the same purpose as asking them to reconstruct the sequence of actions as the reviewer has suggested in addition to their satisfaction with the explanation.\n\n8. _''... 
I would also split the analysis into successful/unsuccessful trajectories to better disentangle the soundness and correctness of the explanation.''_\n\n- Thanks for this advice. We added more examples and analysis in **Appendix D.2**.\n\n We conclude that the performance of immediate explanations is limited: \n - to the cases in which explanation is directly linked to the one of the facts we choose to extract from the knowledge graph, such as object/location information; and \n - by the relative error of knowledge graph extraction models themselves.\n\n9. _''Please provide text-game trajectories (positive and negative!).''_\n\n- Thanks for this advice. We added more examples and analysis in **Appendix D.2 and D.3**.\n\n10. _''Please add a table with all model parameters in the appendix and a training curve. ''_\n\n- We added more model parameters in **Appendix A.2, A.3, and A.9**. The training curve was presented in Appendix A.10.\n\n11. _''The paper does not even provide the discount factor, which is not acceptable from an RL perspective. ''_\n\n- We used the same hyper-parameters with Ammanabrolu et al.. We added more model parameters including the discount factor in **Appendix A.2**.\n\n12. _''...add more content on the game description in the appendix.''_\n\n- We added game descriptions in Appendix D.1. Game statistics has been shown in Table 2.\n\n13. _''Please add the std in Tab2''_\n\n- We added std dev in **Table 2**. The results of other baseline models are from their original paper, where the existing convention in the literature of reporting the raw scores for each game and the normalized averages.\n\n\n\n\n\n\n",
" We thank the reviewer for the time and effort and will make some clarifications below.\n\n1. _''The generation of temporally extended explanations consists of a cascade of different components, either straightforward statistics or prior work. It is impossible to assess the impact and rationale of each component. I encourage the authors to provide an ablation study.''_\n\n- We have shown our ablation study of temporally extended explanations in Appendix C of the previous submission. \n\n To summarize, our findings via human participant study indicate that \n - compared to the Bayesian filter, after applying the CALM model to filter explanation candidates, generated explanations are significantly preferred by human participants;\n - and the full HEX-RL system with Bayes+CALM+Semantic filters provided temporal explanations that human participants felt were more understandable than alternatives.\n\n2. _''Using user study results, I am wondering if there is not a more general solution to assess the importance of relevant game steps to form temporally-extended explanations. For instance, attending overall game steps to estimate P(A|B) and finetuning CALM.''_\n\n- Thanks for your suggestion.\n\n The reason why we did not use a **single** general model to produce temporally-extended explanations in this paper is that different filters serve different functions.\n - Bayesian filter helps to form the most likely chain of causal dependencies that lead to a state. It only considers causal dependencies in this specific game without any semantic relationship between steps.\n - CALM filters down the set of important states by finding the states that have corresponding actions that a human player would be more likely to perform. Since CALM is trained on human transcripts, it considers causal dependencies from the human perspective.\n - The semantic State-Action Filter takes semantic relationships into consideration.\n\n These three filters serve different requirements of temporally-extended explanations. One can use any combination of them for serving different functions. For example, we can produce temporally-extended explanations about causal with the Baysian filter only.\n\n The task of combining these three functions into one model is beyond the scope of this work and we will consider it in future work.\n\n3. _''What would be required to generalize this approach to a related use case, such as QA or Task-oriented Dialogue systems?''_\n\n- We note that the types of QA and task-oriented dialogue systems that a method like ours would be useful for would also need to be grounded in the knowledge graph of their state. I.e. there would need to be a way of extracting the knowledge graph of the world state from the input observations. In our case, this is using a QA model such as seen in Ammanabrolu et al. 2020. Once a knowledge graph of the world knowledge is present, the rest of the architecture we propose can be applied.\n\n We elaborate on some of these requirements in the Limitation section of our previous submission. Our model is limited to providing explanations of systems affected by natural language. We added more requirements for generalizing this approach to a related use case in the *Evaluation* section of the current revision.\n",
" We would like to thank the reviewers for their time spent reading and suggesting improvements to our work. We have made an effort to clarify every point raised by the reviewers in our revised manuscript and have detailed a summary of the changes/clarifications point by point in the text below in addition to providing a more detailed response to each reviewer. We would encourage the reviewers to consider raising their scores.\n\nSummary of changes made by your suggestions:\n\n- A discussion of the limitations of HEX-RL [Asked for by Reviewer Jzdf, SZnA]: \nWe elaborate on some limitations in the *Evaluation* section (4.2 and 4.3). Our model is limited to providing explanations of systems affected by natural language. \nWe added a discussion on how this method can be used for related use cases such as task-oriented dialogue or QA in the *Evaluation* section of the current revision.\nIn Appendix D, we also present a qualitative analysis of successful/unsuccessful explanations with example trajectories in a format similar to that shown to our human participants to give the reader a better sense of the soundness and correctness of generated explanations across baselines. \n\n- Additional related work [Asked for by Reviewer SZnA]: We added more related works to the Related Work section and added more module names in Figure 2 to make it easier to follow in revision. \n\n- Reproducibility details [Asked for by Reviewer SZnA]: We reported more hyperparameters of QA, A2C, and HEX-RL models in Appendix A.2, A.3, A.5, and A.9. Besides, we update performance across agents in Table 2 with standard deviations across 5 random seeds in each game (Training curves with standard deviations can be found in Appendix A.10)\n\n- [Asked for by Reviewer H13U]: We added full action space in Appendix A.1.",
" This paper addresses the problem of generating intermediate natural language explanations for sequential decision making in IF games. The focus is to generate immediate explanations for every step in the game and to generate temporally extended explanations to justify long-term task decisions, which depend on several intermediate steps. The approach is based on a RL Framework which uses a knowledge graph special for IF games as state representation together with several attention mechanism. To generate an immediate explanation, two steps of attention first identify the most relevant subgraph and then the most relevant KG nodes. Temporally extended explanations are generated by a cascade of filtering steps to reduce the number of relevant candidate game steps. The approach is evaluated using user studies. Strengths:\n- The paper provides evidence that temporally-extended explanations deliver value for generating explanations.\n\n\nMain Concern: Generation of temporally-extended explanations\n- The generation of temporally extended explanations consists of a cascade of different components, either straightfoward statistics or prior work. It is impossible to assess the impact and rationale of each component. I encourage the authors to provide an ablation study. Using user study results, I am wondering if there is not a more general solution to assess the importance of relevant game steps to form a temporally-extended explanations. For instance, attending over all game steps to estimate P(A|B) and finetuning CALM. What would be required to generalize this approach to a related usecase, such as QA or Task-oriented Dialogue systems? Yes",
" This paper explores how to leverage graph knowledge within an RL agent to provide an a-posteriori explanation of the agent's actions.\nThe authors first motivate their approach through a well-written introduction clearly stating the research objective.\nThey then detail the underlying neural mechanisms and the different filtering layers to post-process the graph and generate the language explanation.\nFinally, the authors provide two qualitative analyses: a performance benchmark on the agent abilities and a human survey on the quality of the explanation. First of all, I want to felicitate the authors for the quality of the writing and their efforts in explaining the model and research direction. Independently of the paper's content, it was a pleasant and easy read. \nTherefore, the research direction is well-stated. I also appreciated the absence of over-claiming in the paper... except for the title, which incorrectly mentions natural language; while it is templated language. Please remove the word 'natural' from the title. \nThe related work is a bit short, and I would recommend adding a small paragraph on the different research directions in text games that have been pursued, e.g., continual learning [1], action pruning [2], and many others.\nSection 3 gives a good understanding of the method. Although it requires a bit of engineering, they are consistent, and the method remains a promising proof of concept. I would recommend adding some module names in Figure2 (middle) in addition to the network names; otherwise, it is hard to follow. I have a few concerns and questions about the model, which I will detail later.\nIn the experimental section, the authors correctly split the performance and the explainability. The human evaluation seems to have been correctly performed, which is not too common in the ML literature.\nOverall, I got convinced by the approaches, and despite some task-specific design (especially in the filter choice and in the knowledge subgraph), the method seems extendable to other settings.\n\nOverall, I am quite positive about the paper, and it is solid enough to be accepted. However, there are still some critical points I would like to discuss which restrain me from giving a high score.\n\n\n[1] Shuster, Kurt, et al. \"Deploying lifelong open-domain dialogue learning.\" arXiv preprint arXiv:2008.08076 (2020).\n[2] Zahavy, Tom, et al. \"Learn what not to learn: Action elimination with deep reinforcement learning.\" Advances in neural information processing systems 31 (2018). I have four main concerns:\n A) the lack of model ablation\n B) the lack of discussion about the method limitation (which may be linked to ablation)\n C) the lack of qualitative results to give a better intuition of the method\n D) the lack of information for reproducibility\n\nA) The model is based on multiple design choices, yet none of this choice is really discussed. First, both the agent performance and explainability skills are based on the structure of the graph. Therefore, changing the graph, e.g., hiding/fusing subgraphs, or injecting noise, would be of interest to see how the model depends on the expert choice. It exists a discussion in Appendix C (the impact of the filtering), which is never mentioned in the paper, I would recommend discussing it in the core paper. The impact of some hyperparameters (e.g., l193, trajectories) are also not discussed while they may be of paramount importance. 
Finally, in 4.2, intermediate baseline models, LSTM + slot-filling without KG would allow to see whether the model is really more understandable with a KG. Here, it is a bit unfair to compare an unconstraint LSTM model with a grammar-based model while assessing for clarity. \n\nB) It may be the most significant paper weakness: positivism. In other words, the model limitations are never discussed, and the analysis remains purely quantitive without any hints for future direction. How hard is the training/tuning? What are the model mistakes? Would it be possible to compute some language statistics? And importantly, how satisfactory are the explanations? So far, the authors only provide an absolute goal score. However, I would tend to think that the current definition may be too light to apprehend the model performance correctly. Something like: would you be able to reconstruct the sequence of actions given the explanation? Are you satisfied with the current explanation? Furthermore, I would also split the analysis into successful/unsuccessful trajectories to better disentangle the soundness and correctness of the explanation. A perfect example would be: the agent was wrong, yet, did its choices make sense? In any case, happy to further discuss this point!\n\nC) So far, the reader can't have any intuition on the model skills. The only actual example is in the user interface in the appendix. Please provide some [5+] cherry-picked text-game trajectories (positive and negative!). This is very useful when supporting l367, for instance.\n\nD) Please add a table with all model parameters in the appendix and [optional] a training curve. I just cannot accept the paper without such things. For instance, the paper does not even provide the discount factor, which is not acceptable from an RL perspective. Please also add more content on the game description in the appendix, e.g. for each game, graph statistics, game interest description etc. Please add the std in Tab2; max is less interesting. \n\nSo again, the paper has many merits. I am leaning toward acceptance despite some potential improvement. However, I cannot accept it without having C and D solved and having a conversation on the model limitation. n/a",
" This paper focuses on learning to play text adventure games using RL while producing immediate step by step explanation, as well as a trajectory level explanation using a KG based state representation combined with Hierarchical Graph Attention, which they dubbed as HEX-RL. Experiments show comparable performance for playing text-based games compared to SOTA while providing improvement in terms of explainability compared to baseline. Pros: \n\n* Paper is very well written and motivated. \n* Authors provide good review of text games' literature.\n* Human evaluation and the metic for comparing intermediate vs trajectory comparison (i.e., Goal context) seem solid and interesting. \n\nCons:\n\n* The novelty is a bit low. The whole ideas used here for immediate explanation and trajectory level explanation are not totally new and novelty is limited. \nIn terms of immediate explanation, the idea of using Hierarchical Graph Attention for explainability is not new and a similar approach has been proposed in [Xu et al.]. Same applies In terms of providing trajectory level explanation. Even though there are some novelty in combining three steps filtering approach, but each of them separately is not novel. Also authors provide an ablation study in Appendix C which was useful experiment, but I am not fully convinced if those three filters are all necessary. I think it would have been interesting to see the performance of just a larger language model (i.e., a larger CALM) for filtering or a fine tuned version to further simplify the design of filtering. \n* I am not fully convinced of this claim in the introduction-- \"study against strong baselines that shows that our agent generates significantly improved explanations\"; would have been nice to see other closely related baseline for Immediate explanation Evaluation such as [Xu et al.]\n\nAdmittedly, I see novelty in combining two papers [Xu et al. + Ammanabrolu et al.] but I think overall novelty is limitted. 1. What is the full action space? Some discussion about it in main paper would be helpful, Also how this method can be applied in settings with large action space. NA"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"H1NX7NE-tMl",
"yC4IL5u9urM",
"Zcw2eb0S_Cn",
"xnyJuKV5UI",
"6K5l-Pi7ejF",
"nips_2022_DSEP9rCvZln",
"Zcw2eb0S_Cn",
"xnyJuKV5UI",
"6K5l-Pi7ejF",
"nips_2022_DSEP9rCvZln",
"nips_2022_DSEP9rCvZln",
"nips_2022_DSEP9rCvZln",
"nips_2022_DSEP9rCvZln"
] |
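As an illustrative aside: the author rebuttal in the record above explains that temporally extended explanations are produced by cascading three filters over an agent trajectory: a Bayesian causal-dependency filter, a CALM-based human-plausibility filter, and a semantic state-action filter. The Python sketch below only illustrates the overall shape of such a cascade; the `Step` class, function names, thresholds, and scoring callables are hypothetical placeholders, not HEX-RL's actual code or the CALM interface.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    state: str   # textual observation (or knowledge-graph summary) at this step
    action: str  # action the agent took at this step


def filter_trajectory(
    trajectory: List[Step],
    causal_score: Callable[[Step, Step], float],  # placeholder: how strongly the goal step depends on an earlier step
    plausibility: Callable[[Step], float],        # placeholder: how likely a human player would take this step's action
    semantic_sim: Callable[[str, str], float],    # placeholder: semantic relatedness of a step's state and action
    causal_thr: float = 0.5,
    plaus_thr: float = 0.1,
    sem_thr: float = 0.3,
) -> List[Step]:
    """Reduce a full game trajectory to the few steps most useful for a temporal explanation
    by applying three filters in sequence, as the rebuttal above describes at a high level."""
    goal = trajectory[-1]

    # 1) Causal-dependency filter: keep earlier steps the goal step most likely depends on.
    kept = [s for s in trajectory[:-1] if causal_score(s, goal) >= causal_thr]

    # 2) Human-plausibility filter: keep steps whose actions a human would plausibly perform.
    kept = [s for s in kept if plausibility(s) >= plaus_thr]

    # 3) Semantic state-action filter: keep steps whose action is semantically tied to its state.
    kept = [s for s in kept if semantic_sim(s.state, s.action) >= sem_thr]

    return kept + [goal]
```

Passing the three scorers in as callables mirrors the rebuttal's point that the filters serve different functions and can be combined in any subset depending on the kind of explanation wanted.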
nips_2022_hciwLGxCt6S | It's DONE: Direct ONE-shot learning with Hebbian weight imprinting | Learning a new concept from one example is a superior function of the human brain and it is drawing attention in the field of machine learning as a one-shot learning task. In this paper, we propose one of the simplest methods for this task with a nonparametric weight imprinting, named Direct ONE-shot learning (DONE). DONE adds new classes to a pretrained deep neural network (DNN) classifier with neither training optimization nor pretrained-DNN modification. DONE is inspired by Hebbian theory and directly uses the neural activity input of the final dense layer obtained from data that belongs to the new additional class as the synaptic weight with a newly-provided-output neuron for the new class, by transforming all statistical properties of the neural activity into those of synaptic weight. DONE requires just one inference for learning a new concept and its procedure is simple, deterministic, not requiring parameter tuning and hyperparameters. DONE overcomes a problem of existing weight imprinting methods that interfere with the classification of original-class images. The performance of DONE depends entirely on the pretrained DNN model used as a backbone model, and we confirmed that DONE with current well-trained backbone models perform at a decent accuracy. | Reject | ## Summary
Humans can learn a new task from just a couple of examples, whereas supervised deep learning models often require many labeled samples to learn a task. This paper proposes a one-shot learning method inspired by Hebbian learning: a new class is added to the output layer of the network by quantile-normalizing the new inputs against the last-layer weights that correspond to the other classes. The proposed approach uses features of the penultimate layer of an “encoder” network (for example, EfficientNet). The paper presents results in the k-shot classification setting.
## Decision
This paper studies an important problem and introduces interesting ideas, such as quantile normalization, into deep learning models. Nevertheless, the paper requires a revision, as discussed with the reviewers, and as a result it would benefit from another round of reviews. However, during the discussion period none of the reviewers were willing to nominate this paper for acceptance with confidence.
* *Reviewer j4P6* claimed that the authors' claims about connections to Hebbian learning are flawed, and during the discussion period the authors agreed that their algorithm is not Hebbian. This appears to be the biggest problem reviewer j4P6 has with this paper. The reviewer also complained that the paper offers only a hand-wavy explanation of the algorithm and limited evaluations. In the reviewer's view it is a decent contribution overall, but in its current form it does not meet the bar of NeurIPS.
* *Reviewer 8B2d* thinks that this paper needs another revision. The method presented in the article is interesting, but the current manuscript evaluates it with only two models, and the results are better with just one of them; it is therefore not clear whether the improvement is anecdotal or general. Furthermore, the manuscript relates the work to Hebbian learning in the brain, which is irrelevant to presenting the work, and the authors agreed on that point in the discussion with reviewer j4P6. The reviewer finds the article interesting, but the authors should revise the manuscript, remove the Hebbian analogy, and focus on better evaluations, making their statements more sound.
* *Reviewer RJBn*’s score is a “6 - Weak Accept”, but the reviewer is fine with this paper being rejected. After reading the other reviews, RJBn agrees with the other reviewers that the connections drawn to Hebbian learning in the brain are weak and that the evaluation could be better.
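As an illustrative aside: the abstract and summary above describe DONE as weight imprinting in which the activity vector of the one new-class example is not used directly as the new output weights (the w = x rule discussed elsewhere in this record) but is first quantile-normalized so that its statistical properties match those of the existing output-layer weights. The NumPy sketch below is a hedged reconstruction from the text of this record only; the use of the pooled existing weights as the reference distribution, the L2 normalization in the plain variant, and the toy shapes and distributions are assumptions, not the authors' released implementation.

```python
import numpy as np


def imprint_plain(W_ori: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Plain weight imprinting: use the (L2-normalized) activity vector x of the one
    new-class example directly as the weight row of a newly added output neuron.
    This is a simplified variant for contrast, not Qi et al.'s full method."""
    w_new = x / (np.linalg.norm(x) + 1e-12)
    return np.vstack([W_ori, w_new])


def imprint_quantile(W_ori: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Quantile-normalized imprinting: keep the rank order of x but replace its values
    with values drawn from the empirical distribution of the existing weights, so the
    new row shares the statistical properties of W_ori. Pooling all existing weights
    as the reference distribution is an assumption made for this illustration."""
    ref = np.sort(W_ori.ravel())                       # reference distribution (sorted)
    ranks = np.argsort(np.argsort(x))                  # rank of each element of x (0 .. len(x)-1)
    idx = np.round(ranks / (len(x) - 1) * (len(ref) - 1)).astype(int)
    w_new = ref[idx]                                   # value at the matching quantile
    return np.vstack([W_ori, w_new])


# Toy usage: a 1000-class head with 1280-dim features and one new-class example.
rng = np.random.default_rng(0)
W_ori = rng.normal(0.0, 0.02, size=(1000, 1280))       # bell-shaped existing weights (toy)
x = rng.gamma(2.0, 1.0, size=1280)                     # right-tailed toy activity vector
W_plain = imprint_plain(W_ori, x)
W_quant = imprint_quantile(W_ori, x)                   # both have shape (1001, 1280)
```

The toy distributions hint at why the two rules can behave differently: when the activity vector and the existing weight rows have very different distributions, imprinting the raw activity gives the new class an out-of-scale weight row, whereas quantile normalization matches it to the existing rows; this is consistent with the interference discussion that appears later in this record.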
| train | [
"7pRRxMxTsRC",
"7zL5SCJpkCS",
"6Si4SPtKvJ",
"bBJgBScgIKR",
"KjmVUTTngY6",
"SG-1sdvVdq_",
"QZ4SI9K9Nyk",
"HHflK7UMjYZ",
"4Q4tPc_1yiy0",
"WMPDJrLh4Ns",
"VZ0ny8D9quU",
"ENWmtUtFNKg",
"G46I_DmE8NYP",
"8F-C0hW1ue",
"vQdbXCjuDLd",
"VVq0XJG-PtS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for not only your precious comments and discussion to improve our paper, but also for raising the score.\n\nWe have learned a lot from the discussion with you. Thanks to you, as with our papers, our understanding also progresses, which will be encouraging for future research as well.\n\nWe again sincerely appreciate your wonderful support in your limited time.\n(We hope this reply is within the time limit, and please allow this brief/quick gratitude in comparison to your great contribution.)\n\n\n\n\n\n",
" The authors have been convinced that the method is not Hebbian, and agreed to change the paper accordingly (see the thread under my initial review). They also clarified several parts regarding the performance of the method. \n\nI'm still hesitant to call it a clear accept as the contributions remain limited, but I appreciate the discussion we had, so I'm switching my score from borderline reject (4) to borderline accept (5).",
" Thank you so much for the specific reply. We have finally come to an understanding of your comment. We totally agree with it. Our quantile normalization does not belong to Hebbian theory.\n\n> The bottomline everywhere is pre- and post-synaptic activity for two neurons, not a set of pre-synaptic neurons.\n\nThis must be the key point. We finally understand your point. \n\n> we often must augment Hebbian plasticity with more global forms of synaptic modification\n\nFrom this description, our method is something beyond Hebbian. Now we clearly understand your earlier comment that meant \"before quantile\" was Hebbian but \"quantile and after\" was not.\n\nOur method is certainly Hebbian-inspired and includes Hebbian, but it is wrong to call quantile normalization \"Hebbian weight imprinting\". \n\nTherefore, the title and corresponding text should be changed to \"quantile weight imprinting.\" Also, related statements such as \"the implementation of Hebbian theory\" should be revised. However, these changes are minor and do not affect the main point of our paper (we think as you suggested).\n\nThus if our paper is accepted, we will email the program chairs that we want to change the title at the timing of camera-ready etc. (it seems to have been possible at least in 2021)\n\nAs above, we have finally come to an understanding of your comment. We really appreciate your patient discussion to improve our paper when you must be very busy. \n\nThank you from the bottom of my heart.",
" > synaptic saturation\n\nIt is a physical constraint on the size of the synapse (and the amount neurotransmitters that can fit in) that real neurons do have access to (even if implicitly).\n\n> However, this narrow sense can not apply to the synaptic saturation\n\nWhy not? Synaptic changes can be proportional to some quantity until they reach a fixed limit. I don’t see a contradiction.\n\n> synaptic saturaion vs. quantile normalization\n\nYour algorithm is non-local, unlike synaptic saturation. Hebbian plasticity with synaptic saturation would only use activity of a single pre-synaptic neuron, a post-synaptic neuron, and the size of the synapse itself. Your algorithm would need information about _all_ pre-synaptic neurons and information about other weights in the network to first rank the pre-synaptic neurons, and then communicate the appropriate weights to individual synapses. So it has two issues for real neurons: weights from the rest of the network have to be messaged to the synapses at one particular neuron; pre-synaptic neurons have to be ranked according to that weight information. Both of these things can’t just happen at a neural level, as this is a non-trivial computation. So it’s not similar to synaptic saturation.\n\n> Definitions of Hebbian\n\n1. The classic one from Hebb (https://neurology.mhmedical.com/content.aspx?bookid=3024§ionid=254335819#1180645941):\n> According to Hebb’s rule: “When an axon of cell A … excites cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells so that A’s efficiency as one of the cells firing B is increased.” The key element of Hebb’s rule is the requirement for coincidence of pre- and postsynaptic firing, and so the rule has sometimes been rephrased as “Cells that fire together, wire together.”\n\n2. Introduction of Chapter 8 of Dayan and Abbott:\n> In this chapter we largely focus on activity-dependent synaptic plasticity\nof the Hebbian type, meaning plasticity based on correlations of pre- and\npostsynaptic firing. To ensure stability and to obtain interesting results, we\noften must augment Hebbian plasticity with more global forms of synaptic\nmodification that, for example, scale the strengths of all the synapses onto\na given neuron. These can have a major impact on the outcome of develop-\nment or learning. Non-Hebbian forms of synaptic plasticity, such as those non-Hebbian\nplasticitythat modify synaptic strengths solely on the basis of pre- or postsynaptic\nfiring, are likely to play important roles in homeostatic, developmental,\nand learning processes.\n\nNote the “augment” part for other forms of plasticity, and the “correlation between pre and post”, not just pre-synaptic activity.\n\n3. Nonlinear Hebbian learning https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005070\n\n— this one still uses $\\mathbf{x} f(y)$ for a pre-synaptic vector $\\mathbf{x}$ and post-synaptic scalar $y$. Note no information exchange between synapses.\n\n4. Three-factor rules https://www.frontiersin.org/articles/10.3389/fncir.2015.00085/full#h3\n\nEq. 1 formulates Hebbian learning as $\\Delta w = H(pre, post)$ for _two_ neurons, so it doesn’t depend on the rest of the network. (As an aside, your rule could fit three-factor Hebbian form $H(M, pre, post)$ for modulation $M$, but it’d have to be synapse-specific. 
Which defeats the purpose of the third factor, so the rule stays as much Hebbian as backprop, i.e., not Hebbian.)\n\nThe bottomline everywhere is pre- and post-synaptic activity for _two_ neurons, not a set of pre-synaptic neurons. Your algorithm needs all pre-synaptic neurons, and also external information, making it non-Hebbian.",
" Thank you very much for your prompt reply.\n\n> 1. results\n> Currently the interference results only have EfficientNet, ...\n\nWe are sorry that our previous response \"Not only with EfficientNet...\" was unclear. This description meant the results of the interference in Figure 3d (and supplementary results that will be added), not Figure 5. Figure 5 does not show the results of class addition and thus the interference is irrelevant, therefore the difference between DONE and Qi's method is not important.\n\nWe used Resnet-12 because it was a standard model used in one-shot learning research. We will add the results with Resnet-50 and VGG16 in Figure 5 (no significant difference between DONE and Qi's method). \n\n\n> I don't think you can claim that what you see for ViT and EfficientNet will...\n\nWe agree with the comment. We already have results on the differences in the distributions of x and w_i vectors that give rise to differences in DONE and Qi's method, also for other DNNs such as ViT-B16, ResNet50, VGG16, and MobileNetV2. All but ViT have right-tailed x distributions and bell-shaped w_i distributions, similar to EfficientNet.\n\nWe will show those results in the supplemental material. We are sorry that we did not explain it to you properly in the previous response. \n\nNevertheless, as you commented, we do not think we can claim that what we see for those DNNs will happen for all transformers and all CNNs, but we think it is much better than EfficientNet alone.\n\n\n\n> 2. Hebbian learning\n> The last sentence contradicts the first,...\n\nThe first sentence is about real neurons, and the last sentence is about implementation in ANN, and thus they are not contradictory.\nThis time we also refer to another method to explain that using the weight information at the implementation does not separate our method from the framework of Hebbian.\n\nFor example, \"synaptic saturation\" is used in some Hebbian implementations.\nReal neurons would not have access to the information of the saturation. However, when implementing it in ANN, an arbitrary maximum value can be used as the saturation.\nThis implementation can be interpreted as the physical constraint of neurons that is inevitably given to real neurons, and the method does not leave the framework of Hebbian.\n\nOur method does something similar. Our method uses weight information just to set the physical constraint of neurons, just like the synaptic saturation.\nFrom the above, the first and last sentences are not contradictory, and the use of the weight information in the implementation does not separate our method from Hebbian.\n\n\n> It doesn't. Nothing in the paper discusses how your idea relates to real neurons.\n> Again, \"uses pre-synaptic activities\" doesn't mean \"Hebbian\"...\n> 3. 
Backprop I gave an example of an algorithm ...\n\n\nOur method is a weight imprinting that does not modify W_ori (original W matrix), which is explained by pre- (x) and postsynaptic (y) firing.\nIt is not like the Non-Hebbian form such as solely on the basis of presynaptic firing.\nWe think we have already got your understanding that the explanation so far is related to Hebbian.\nTherefore, we think the problem you point out is quantile normalization.\n(By the way, Qi's method changes W_ori)\n\nIn the book you introduced (we will cite it, thank you), there is the following description (page 281 in the book): \n\"General forms of the Hebb rule state that synapses change in proportion to the correlation or covariance of the activities of the pre- and postsynaptic neurons.\"\nHowever, this narrow sense can not apply to the synaptic saturation, which are treated as within Hebbian in the book. Therefore, we do not think your comment is that this narrow sense diverges our method from Hebbian.\n\n\nIn our method, just like the synaptic saturation, we transform xy to $\\\\Delta w$ by using quantile normalization as a computational formulation of the physical constraint of neurons.\nAt the quantile normalization, the strength ranking of xy remains unchanged, and only the scale is changed. The scale transformation is nonlinear, like the synaptic saturation. \nTherefore, like the synaptic saturation, we do not think our method is far from Hebbian theory.\n\nCertainly, quantile normalization is a new process for Hebbian implementation, thus in that sense, it would be necessary to discuss whether it fits within the framework of Hebbian so far.\nTherefore, we believe this discussion is meaningful, and indeed our understanding has also improved, thanks to your comments.\nNow we do not think that our method departs from Hebbian as described above, and so far we have not seen a definition that indicates our method is far from Hebbian (except the narrow definition above).\n\n\nIf you still think our above description is wrong, we would appreciate it if you could point out specifically where it is wrong again, as you did in the previous comments.\n\nThank you very much again for your time and important comments.\n\n",
" 1. results\n> Not only with EfficientNet, but also with most CNNs, a similar thing happens because the statistical features of x and wi vectors are different.\n\nCurrently the interference results only have EfficientNet, however. Fig. 5 suggests that Inception will have a similar results due to lower performance of Qi's method, but not ResNet12. (Btw, I don't recall a single paper that used ResNet12. It's usually ResNet18 or 50.)\n\nI don't think you can claim that what you see for ViT and EfficientNet will happen for all transformers and all CNNs. I don't disagree that it's an interesting difference for these two architectures, and that it justifies the method. However, I consider it a minor improvement over Qi's method as currently it is only happening for one specific architecture. \n\n2. Hebbian learning\n> Real neurons do not need access to the information of weight distributions. Real neurons inevitably satisfy the characteristics of real synaptic strengths, as a physical constraint of x and y neurons. Our method implements this physical constraint of neurons by using quantile normalization.\n\nThe last sentence contradicts the first, as you need weight information for quantile normalization wrt weights.\n\n> Therefore, our paper proposes a more realistic implementation of the Hebbian theory\n\nIt doesn't. Nothing in the paper discusses how your idea relates to real neurons.\n\nAgain, \"uses pre-synaptic activities\" doesn't mean \"Hebbian\". It has a more specific meaning used in neuroscience (see the explanation and the link to a book above), which I could not find in the paper.\n\n3. Backprop\nI gave an example of an algorithm that fits your definition of Hebbian as it uses pre-synaptic activity for weight updates, but is not considered Hebbian by the community.",
" Thank you very much for reading our revised manuscript and for providing more focused and constructive discussion points. \nWe believe here we can reach a shared understanding through our response below, thanks to your clear comments again.\n> Comparison with other methods\n\nDONE is a method for class addition tasks, and accuracy in Fig. 5 is not for showing DONE's advantage but for comparing various backbone DNNs and for evaluating other methods that use optimizations (as written in the paper, and as you probably understand).\n\nOn the other hand, interference in class addition is important for DONE as the class addition is the main task (Fig. 3d). Not only with EfficientNet, but also with most CNNs, a similar thing happens because the statistical features of x and wi vectors are different.\n(we will add the results in the supplemental material).\n\nQi's method misrecognized 10% of the input images as belonging to the new classes, despite the chance level of approximately 1% (8/1008). The addition of new classes actively interfered with the original classification (10 times more than chance), which suggested the situations in which this method could be used were limited. On the other hand, DONE's interference was about the chance level (1%) even with EfficientNet. \n\nDONE and Qi's method showed almost the same results when the backbone DNN was ViT (as you commented, and the reason was also explained in the paper). However, as mentioned above, DONE and Qi's method would give different results in most current backbone DNNs except ViT. In addition, the future backbone DNNs could also make the difference. We believe that there is a big difference between DONE, which can be used non-parametrically for any DNN, and Qi's method, which can only be used with limited DNN such as ViT (just like the difference between parametric and non-parametric statistical tests).\n\nWe believe the discussion here is constructive and we would like to express it in the paper, e.g., by adding a sentence like \"Qi's method misrecognized 10% of the input images as belonging to the new classes despite the chance level of about 1% (8/1008), while DONE's interference was about the chance level (1%).” at line 265.\n> Hebbian learning\n\nThank you again for the clear explanation. With our explanations below, we believe we can reach a shared understanding that our paper presents a new implementation of Hebbian theory. Before the discussion here, we would like to let you know that we think it would be worth changing \"Hebbian weight imprinting\" to \"Quantile weight imprinting\" to avoid confusion if we can change the title at the camera-ready submission or the subsequent submission (it was impossible to change the title at the rebuttal revision), although our paper is actually related to Hebbian theory as described below.\n> but they're re-normalized according to a weight distribution which real neurons wouldn't have access to.\n\nWe think this is the key point, and thus we here focus on it. Real neurons do not need access to the information of weight distributions. Real neurons inevitably satisfy the characteristics of real synaptic strengths, as a physical constraint of x and y neurons. Our method implements this physical constraint of neurons by using quantile normalization. 
Hebbian theory inevitably involves such transformations (influences) from neural activity to synaptic strength, and our method provides a realistic implementation of this transformation.\n\nConversely, for real neurons, it is impossible to have a linear relationship between neural activity and synaptic strength, although it would be one of the simplest implementations for calculations.\n\nTherefore, our paper proposes a more realistic implementation of the Hebbian theory. Moreover, our paper shows it works better also as a method, because quantile normalization is a non-parametric method and it can realize the above-mentioned realistic implementation with any backbone DNN.\n\nWe again believe this discussion is constructive and would like to reflect it in the paper, e.g., by modifying a sentence at line 58 as \" Here, a problem arises with this simple formulation alone, because neural activity and synaptic weight are different in scale and those relationships would not be linear, not only in real brain but also in DNN.” and adding a sentence like “Note that real neurons do not need access to the information of weight distributions, but they inevitably satisfy the characteristics of real synaptic strengths as a physical constraint of neurons.” at line 76.\n> One hopefully clarifying example is backprop: ...\n\nWe never thought about the relationship between backpropagation and Hebbian theory. We have not yet quite figured it out, but we are excited about the idea. Thank you for giving us the interesting new idea.\n> Small correction in line 236: ...\n\nThank you for the comment. We will correct it.\n\nWe would like to express our gratitude again for your time and comments for the constructive discussion.",
" Thank you for your response! I'm mostly satisfied with the small corrections, but I don't think my main concerns are addressed. So I'm leaving the same score (4, reject).\n\n1. Comparison with other methods\n\nLooking at Fig. 5, the proposed method works basically as well as Qi's method, and worse than some other methods. There's some improvement wrt interference for EfficientNet (Fig. 3d), but I don't think that improvement alone (given exactly the same performance for ViT) justifies a NeurIPS paper. \n\n2. Hebbian learning\n\nI've checked the update explanation, and I still think the method has nothing to do with Hebbian learning. \n\nThe concept of Hebbian learning has a specific meaning in neuroscience (see Chapter 8 in Theoretical Neuroscience by Dayan and Abbott). The main idea is that synaptic changes should depend on activity of the pre- and post-synaptic neurons. Modifications of the standard Hebbian rule like BCM rule, or Oja's rule, or 3-factor Hebbian rules have their own names, but still emphasize the dependence on pre- and post-synaptic activity. In your case, Qi's rule (Eq. 2) is indeed Hebbian -- the post-synaptic neuron is fixed at 1, the pre-synaptic one is fixed at $x$, so the weight becomes $x$. I think that calling quantile normalization Hebbian is a big stretch -- yes, it depends on re-normalized inputs, but they're re-normalized according to a weight distribution which real neurons wouldn't have access to. This breaks any meaningful connections with Hebbian learning as a biological concept. One hopefully clarifying example is backprop: weight changes depend on the activations at the pre-synaptic layer and the propagated error. Despite the dependency on the pre-synaptic activations, it's not considered a Hebbian rule.\n\nSmall correction in line 236: neuronal refers to actual neurons. For artificial neurons the usual word is \"neural\".",
" Thank you very much for reading and commenting on our revised manuscript.\n\nWe are grateful that you, as a reviewer who commented on the essential points, gave us a higher evaluation.\n\nIn addition, we would like to thank you again for raising the following important discussion points.\n\n> However, there are still some flaws. Mainly, the relation the Hebbian learning is irrelevant. The results do not teach anything new about Hebbian learning or the brain. Therefore, the paper's relation to Hebbian learning in that Hebbian learning is an interpretation of the model. As such, its mention belongs to the Discussion. \n\nThank you for your important comments regarding the paper's relation to Hebbian learning. We agree that it is an \"interpretation\" and its mention belongs to the Discussion.\n\nThis paper presents the model's relation to Hebbian learning and the performance of the model. In hindsight (i.e., now), \" Hebbian-inspired model was useful\" can be the same as \"the model built for performance was related to Hebbian.\" Therefore, we think it can be both a \"motive\" and an \"interpretation.\"\n\nWe think both \"the mention belongs to the Introduction\" and \"the mention belongs to the Discussion\" have pros and cons. We think that the easiest way for readers to understand why we introduced Weight imprinting and Quantile normalization among many methods is that the mention belongs to the Introduction. \n\nTherefore, we think that the mention is better in the Introduction, but we believe the discussion here is so constructive that we would like to reflect it in the paper, e.g., it might be worth stating like \"We here explain the interpretation of the relationship between our method and Hebbian theory\" at the part in the Introduction. Also, we think it might be better to add a sentence like \"In this paper, Hebbian theory was described as a source of inspiration and an interpretation of the model, but it is also expected to deepen our understanding of the brain and Hebbian theory through further analysis using various backbones and tasks, as well as neuroscientific experiments” in the “Conclusion and Future work.”\n\n\n> The results do not teach anything new about Hebbian learning or the brain.\n\nWe believe that our interpretation is also useful for understanding the brain (and we feel you also probably mean similar things by \"interpretation\"). For example, suppose someone has created a model that can explain/reproduce an unexplained phenomenon/function. Its explanation/reproduction does not imply that the cause of the phenomenon/function is the same as the principle of the model. However, it amounts to proposing a hypothesis that was not rejected in at least one aspect. Therefore, by conducting next research (e.g., brain experiments) according to that hypothesis, we can make further challenges toward understanding the phenomenon/function.\n\nThis paper is also not useless for understanding the brain. For example, we can study the brain by considering what the new-y addition here is in the brain (ycat, in Figure 1). One might hypothesize that it is a top-down function of the brain. It may allow us to ask about higher-order functions such as the frontal cortex. We might be able to investigate that with fMRI in tasks where humans are learning new concepts.\n\n\n> Another flaw is that the evaluation is limited to only two backbones and one image classification task. 
As the two backbones show different results, further comparison with other models would have been beneficial.\n\nWe agree with this comment and will add results with other backbones in the supplemental material. In all the CNNs we used, the frequency distributions of x vectors were right-tailed and the distributions of w vectors were bell-shaped. In other words, only ViT showed bell-shaped x vectors.\n\nAlthough not directly related to this paper, we here discuss the x distribution in the brain, relating to the above discussion on understanding the brain. At first, we thought that the right-tailed would be a more suitable distribution of neural firings. However, after considering the possibility that it is bell-shaped, we thought x could be considered as activity of neural clusters instead of neurons. This would be consistent with previous macaque experimental results suggesting that an object was represented by a combination of multiple neural clusters, each representing a visual feature [Tshunoda 2001]. Therefore, it would not be strange for the frequency distribution of x to be bell-shaped like ViT. In this way, it might be interesting to consider the distribution of x in the brain (although we do not argue it would be directly useful).\n\nK. Tsunoda, Y. Yamane, M. Nishizaki, and M. Tanifuji, “Complex objects are represented in macaque inferotemporal cortex by the combination of feature columns,” Nat Neurosci, vol. 4, no. 8, Art. no. 8 , Aug. 2001, doi: 10.1038/90547.\n\nWe sincerely appreciate your time.\n",
" Thank you for addressing my comments. Your updated manuscript is indeed much better than the previous one.\n\nI changed my rating to 5, borderline accept. The paper presents a simple method for defining new classes based on their neural activity in the last layer of a pretrained network. It is a nice small addition to the literature. However, there are still some flaws. Mainly, the relation the Hebbian learning is irrelevant. The results do not teach anything new about Hebbian learning or the brain. Therefore, the paper's relation to Hebbian learning in that Hebbian learning is an $\\textit{interpretation}$ of the model. As such, its mention belongs to the Discussion. Another flaw is that the evaluation is limited to only two backbones and one image classification task. As the two backbones show different results, further comparison with other models would have been beneficial.",
" \nThank you for your comments and essential ideas for improving our manuscript. Now we believe the value of our paper has remarkably increased. \n\nWe really appreciate your idea, in particular, the idea of presenting a comparison with the previous method in figures rather than text is essential for the improvement.\n\nWe thoroughly edited our manuscript according to your comments and ideas as described below.\n\n> Strengths:\n> Introducing quantile normalization in machine learning, a method that was unknown to me.\n\nThank you for your understanding. Quantile normalization actually has been used in machine learning field (very rare though; e.g., Yang and Shami, arXiv.2201.11812.), but has never been used for implementation of Hebbian learning. Because of your and other reviewer's encouraging comments, we decided to emphasize quantile normalization and its application to Hebbian implementation.\n\n\n> The link to Hebbian learning is weak and not meaningful for this paper goals.\n\nWe think this comment is due to our poor description. Our paper provides a new math formation for Hebbian theory and shows that it works. Thanks to the reviewers, the description for the link to Hebbian learning in the revised manuscript is dramatically improved. We believe that the revised manuscript conveys that Hebbian learning is meaningful both for the goals of the paper and researchers related to NeurIPS.\n\n> The idea that the last dense layer of a DNN includes representations that allow novel classifications is not novel, without any model modification, is not novel... (e.g., [1] analyzed it theoretically).\n\nWe agree that those alone are not novel, especially in the fields of metric learning or out-of-distribution detection. Weight imprinting is one of the typical examples to apply it for class addition task. We have included the paper [1] in the part of these explanations in “Introduction” section (line 38). Thank you for giving us a very interesting paper.\n\n> There are many writing problems. ...\n\nThank you for your advice. We thoroughly edited our manuscript. For example, the arrow is not the screen cursor, but we explained it in the text. We agree that such misinterpretations are all due to our poor description. As you pointed out, these problems do not occur if the explanation is in the caption instead of in the text. \n\n\n> The method does not improve upon previous works Figure 2 shows that Qi’s method works similarly or better. I am not convinced that the backbone modifications it does are detrimental.\n\nOur method overcomes a problem with Qi's method, but our writing was bad. To clarify the difference between our method and Qi's method, we have made a major improvement to include the comparison results in all figures except scheme (i.e., Figures 2-5), which is your idea as below. \n\nThe higher accuracy by Qi's method with EfficientNet in Figure 2 does not mean high performance. This is because Qi's method with EfficientNet just tends to respond to any input as a new class. Thus, it is not an advantage but a problem. We added this explanation (line 227).\n\nWe do not think that backbone modifications are detrimental to functionality. We just think it is better not to modify it if it is not necessary/effective (and added a sentence: line 140). 
We think it is common in modeling.\n\nIn any case, all of these are considered to be misunderstanding due to our bad descriptions, which have been corrected in the revised manuscript.\n\n\n> Accepting the updated paper that was submitted via the supplementary materials, at a later deadline, is unfair to other authors.\n\nWe agree with it. We found a coding error when we were preparing the code for the supplemental material, and we thought it would be fair to let reviewers know it as soon as possible, although those changes did not affect the claims of the paper.\n\n> My main suggestion is that you would make a further effort to convince this method's benefits over Qi's...\n\nWe strongly agree with this comment and idea, and we have revised our paper with this idea. As described above, now all figures (except the scheme) include comparison results with Qi's method. We also added figures to explain why (Figure 1b and 2c). We feel those revisions have dramatically improved the quality of the paper. Thank you for the important and essential idea.\n\n\n> Also, I suggest adding more details about the practical implementation of quantile normalization.\n\nThank you for the suggestion. We agree with it and have added the details (line 156).\n\n> Finally, the paper needs to be thoroughly edited.\n\n\nThank you for allowing us to make major revisions. We indeed thoroughly edited our manuscript according to comments and ideas of you and the other reviewers. We feel that this revision has greatly improved the value of our paper.\n\nFinally, we are very grateful for your time and essential ideas for improving our manuscript, which absolutely increased the value of our paper.\n",
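The exchange above keeps returning to the practical implementation of quantile normalization, which DONE uses to map a new activation vector onto the value distribution of the existing classification weights (the `w_ave` reference mentioned in the responses). Below is a minimal, illustrative NumPy sketch of generic quantile normalization, not the authors' code; the function name and the choice of the mean weight row as reference are assumptions.

```python
import numpy as np

def quantile_normalize(x, reference):
    """Map the values of `x` onto the empirical value distribution of `reference`.

    Each entry of `x` is replaced by the reference value of the same rank, so the
    output keeps the ordering of `x` but adopts the distribution of `reference`.
    """
    x = np.asarray(x, dtype=float)
    reference = np.asarray(reference, dtype=float)
    order = np.argsort(x)              # ranks of the input entries
    ref_sorted = np.sort(reference)    # target values, sorted
    if len(reference) != len(x):       # interpolate reference quantiles if sizes differ
        ref_sorted = np.interp(np.linspace(0.0, 1.0, len(x)),
                               np.linspace(0.0, 1.0, len(reference)),
                               ref_sorted)
    out = np.empty_like(x)
    out[order] = ref_sorted            # assign by rank
    return out

# Toy usage: imprint a right-tailed activation onto a bell-shaped weight profile.
rng = np.random.default_rng(0)
activation = rng.exponential(size=512)         # stand-in for x (right-tailed)
weights = rng.normal(size=(1000, 512))         # stand-in for the existing weight rows
new_weight = quantile_normalize(activation, weights.mean(axis=0))
```

The rank-based mapping keeps the ordering of the activation while giving it the reference's value distribution, which is the property the responses emphasize when contrasting the right-tailed CNN activations with the bell-shaped weights.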
" \nThank you for your to-the-point comments and great ideas for explaining the core part, which taught us not just what to revise but also how to revise the manuscript.\n\nWe really appreciate it, in particular, you gave us a concrete idea of how we should explain the relationship with Hebbian theory, as well as many other necessary comments for improvement.\n\nBelow are our answers to all your comments in \"weaknesses\" and \"questions\".\n\n> This work has \"Hebbian\" in the title and some part of the text, but ...\n\nOur paper provides a new mathematical formation for Hebbian theory and shows that it works, but our writing was bad. As described below, the relationship with Hebbian theory is clearly explained in the revised manuscript, and we feel that the paper has improved dramatically, thanks to your idea. \n\n> Performance of the method is not adequately compared to previous work...\n\nWe agree with this comment. We have made a major improvement to include comparison results with the previous method in all figures except scheme (i.e., Figures 2-5), as described below.\n\n> My current score is 4 (borderline reject). The paper proposed an interesting method, ...\n\nThank you for your precise and encouraging comments. According to your comments, we revised the entire manuscript, as described below.\n\n> Hebbian learning\n> This work does not describe a Hebbian mechanism of learning. Hebbian means dWij=f(xi,yj) ...\n\nThis comment was very informative for us, and we learned a lot from it. There is certainly a strong relationship with Hebbian learning, but we once considered removing the mentions of Hebbian learning, as it was difficult to convey the fact and its importance.\nHowever, because you gave us the idea of how to explain it, and because you and the other reviewers also placed importance on the relationship with Hebbian learning (as NeurIPS), we could work on a major improvement in explaining the relationship.\nNow we believe our revised manuscript thoroughly explains what makes our method Hebbian. (line 48-76 with Figure 1)\n\n> Comparison with previous methods\n> In lines 104-109 the authors try to explain why they’re not comparing their method...\n\nWe agree with the comment. We revised the description (line 113) with a new Figure 5 for the comparison.\n\n> Overall reason behind the method\n> Section 3.1: I did not understand why quantile normalization helps. ...\n\nThank you for the comment. We added new figures (1b and 2c, with consequence 3d, 4c) and descriptions about why quantile normalization helps (line 58-76, 227-239). We now feel that these explanations are useful not only for our method but also for other studies, and that these explanations considerably increase the value of reading this paper.\n\n\n> Lines 135-136: \"the backbone DNN model is a very good model...\n\nWe agree with this comment and have amended this part (line 143).\n\n> 153: what probability distributions?...\n\nThank you for the comment. After taking into account this comment, we have changed the related description (especially we changed “W” to “w_ave”), which changes this \"probability distributions\" to \"frequency distribution\". We think the newly created Figure 1(b) is useful also in clarifying what kind of distribution we are dealing with.\n\n> 166: \"The range of applicable models is yet unclear...\n\nWe agree with the comment and have removed this sentence. 
As described below, we added a subsection “Limitations, applications, and potential negative societal impacts” (line 167) and replaced it with sentences that would be more important as you suggested.\n\n> 223-224: what’s good accuracy for practical uses? ...\n\nWe agree with the comment and have deleted this expression.\n\n\n> Small corrections\n> Note: the paper contains a lot of grammatical errors. ...\n\n> Line 15 (abstract): performs \"at a\", not \"a\", practical-level accuracy?\n\n> Also next sentence: Can write as \"DONE overcomes ...\n\n> 92: uses \"a\" classification\n\n> 93: aim\"s\"\n\n> 145: applies to, not apply into\n\n> 212-214: black and orange said twice\n\nWe agree with all these comments and have revised the manuscript according to them. Thank you for the comments.\n\n> Why is the supplementary material just a corrected version of the main paper?\n\nWe thought that the information that is not directly related will use the time of the reviewers, but now we add 3 supplementary figures in this revised submission.\n\n\n> Limitations:\n> The authors can mostly addressed the limitations of their work (although see below), ...\n\n> Checklist 1C: the described model can be easily adapted for face recognition. ...\n\n> Related, the authors refer to practical uses throughout the whole paper, ...\n\nWe agree with these comments and have added and revised the descriptions in the revised manuscript. Thank you for the idea and suggestion.\n\nFinally, we greatly appreciate the time you spent reviewing our paper, which was definitely essential to improve the core of the paper.\n",
" \nThank you for your encouraging and productive comments and ideas. Your comments have dramatically improved our paper. \n\nWe really appreciate the comments, in particular, your comments convinced us of the value of explaining Hebbian theory more, not decreasing the Hebbian-related description, as well as the many necessary comments to improve the manuscript.\n \nBelow we provide point-by-point responses to all comments in \"weaknesses\" and \"questions\".\n \n> The authors compare their method with only one method, when it would have been helpful for readers to compare to a broader range of existing one-shot learning approaches for image classification. ...\n \nWe agree with the comment. We revised the description (line 113) and showed those results in newly created Figure 5.\n \n> While the algorithm from the paper is novel and undoubtedly very elegant, it is very similar to Qi’s method and other papers that the authors cite. In this sense, the paper is not a must-read for most researchers in the community.\n \nOur paper provides a new mathematical formation for Hebbian theory and shows that it works. Thanks to your comments, now the paper includes better descriptions about the relationship with Hebbian theory and the new implementation for it, which is not in the previous paper. Now we can believe our paper is a must-read for most researchers in the community.\n \n> The authors write on line 31 that “The human brain does not necessarily have more complex processes than DNNs…”. I recommend to remove this statement. Many researchers would disagree with the statement, and the statement is not essential to the paper.\n \nWe agree with the comment and have removed this sentence. We are rather glad that many researchers would consider that the brain has more complex features, because it will encourage our future work. (Although unrelated to this paper, we personally would like to challenge dynamic and complex features of the brain.)\n \n> The authors write on line 33 that “a series of simple processes such as linear filtering followed by a nonlinearity can describe the function of lower visual cortex”. I recommend to rephrase this slightly, as learning in the lower visual cortex is part of the lower visual cortex’s function, but that aspect is not described by a series of simple processes such as linear filtering followed by a nonlinearity.\n \nWe agree with the comment and have modified the description (almost removed) and its place. (line 36)\n \n> “modify” on line 48 should be “modifies”\n \nThank you for pointing out our mistake. It is corrected in the revised manuscript.\n \n> On line 61, the authors write “neural activity and synaptic strength are different in dimension.” Do the authors mean “neural activity and synaptic strength are different in scale.”? Line 157 also mentions “different dimension” when the authors may have meant “different scale”.\n \nThe comment is exactly right. \"Scale\" must be a much better wording and we have modified it throughout the paper. Thank you for your understanding and improvement ideas.\n \n> On line 135, the authors write “the backbone DNN model is a very good model as a heritage of mankind”. I did not understand this sentence. Could the authors rephrase it?\n \nWe agree with this comment and have amended this part (line 143).\n \n> Figure 2 is impossible to understand without referring to explanations of data point markers in section 4.1, which means that a reader needs to jump back and forth between the Figure 2 and the text on the page after. 
I recommend explaining the figure markers in Figure 2’s caption instead of in section 4.1.\n \nWe agree with this comment and have added markers explanation in the figure caption/legend. We also applied the same revision to all the other figures. Thank you for your improvement ideas.\n \n> On line 185, the authors write “(not 1008 classes here)”. At this point, the reader has not yet read about the 1008-class experiments. How about removing the comment in parentheses?\n\n> The EfficientNet architecture is misspelled in a number of places in the paper, e.g. as “EfficinetNet” and “EfficientNnet”\n\n> Koray Kavukcuoglu’s name should be upper-cased on line 377.\n \nWe agree with these comments and have revised the manuscript according to them.\n \nFinally, we would like to express our gratitude again for the opportunity to have your precious comments, which not only greatly contributed to the improvement of our paper but also strongly encouraged us.\n\n",
" The authors present Direct ONE-shot learning with Hebbian imprinting (DONE) a method for one-shot learning inspired by Hebbian learning in the brain. The method uses neural activations from the final layer of an “encoder” network (such as a vision transformer or EfficientNet) on a single example from an unseen class to create a weight vector for the class of the new class. The presented method is closely related to and improves upon a that of a 2018 paper by Qi et al, which the authors cite. \n\nThe authors present experiments in which DONE is compared to Qi’s method. Specifically, the authors one-shot-learn 1 or 8 classes, evaluate the performance on those classes, and measure the degree to which the new classes interfere with initially trained classes. The authors also present results on k-shot learning. \n Strengths:\n1. The paper presents a novel and simple method for few-shot learning. \n2. While the paper does not claim to model the brain, it is exciting to see that one-shot learning with brain-line Hebbian imprinting can work so well. \n3. This paper is a pleasure to read. It is well-structured, and the writing is mostly clear. \n\nWeaknesses\n1. The authors compare their method with only one method, when it would have been helpful for readers to compare to a broader range of existing one-shot learning approaches for image classification. The authors state “It is meaningless to compare the above three approaches with weight imprinting, because weight imprinting does not contain any optimization algorithm. “ . I disagree with this, for two reasons. First, papers such as the one about one-shot learning with siamese networks (https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf), which the authors cited) are quite similar to DONE in that a big model gets trained on large amounts of data in a time-consuming process, and later one-shot classification becomes cheap. In other words, both were optimized at some point. Second, even if two models fall into different categories (e.g. because one is much more computationally expensive than the other), it’s useful for readers to know how much accuracy they lose (if any) by using a more flexible methodology. \n2. While the algorithm from the paper is novel and undoubtedly very elegant, it is very similar to Qi’s method and other papers that the authors cite. In this sense, the paper is not a must-read for most researchers in the community. \n 1. The authors write on line 31 that “The human brain does not necessarily have more complex processes than DNNs…”. I recommend to remove this statement. Many researchers would disagree with the statement, and the statement is not essential to the paper. \n\n2. The authors write on line 33 that “a series of simple processes such as linear filtering followed by a nonlinearity can describe the function of lower visual cortex”. I recommend to rephrase this slightly, as learning in the lower visual cortex is part of the lower visual cortex’s function, but that aspect is not described by a series of simple processes such as linear filtering followed by a nonlinearity. \n\n3. “modify” on line 48 should be “modifies”\n\n4. On line 61, the authors write “neural activity and synaptic strength are different in dimension.” Do the authors mean “neural activity and synaptic strength are different in scale.”? Line 157 also mentions “different dimension” when the authors may have meant “different scale”. \n\n5. On line 135, the authors write “the backbone DNN model is a very good model as a heritage of mankind”. 
I did not understand this sentence. Could the authors rephrase it?\n\n6. Figure 2 is impossible to understand without referring to explanations of data point markers in section 4.1, which means that a reader needs to jump back and forth between the Figure 2 and the text on the page after. I recommend explaining the figure markers in Figure 2’s caption instead of in section 4.1. \n\n7. On line 185, the authors write “(not 1008 classes here)”. At this point, the reader has not yet read about the 1008-class experiments. How about removing the comment in parentheses?\n\n8. The EfficientNet architecture is misspelled in a number of places in the paper, e.g. as “EfficinetNet” and “EfficientNnet”\n\n9. Koray Kavukcuoglu’s name should be upper-cased on line 377. The authors have adequately addressed the limitations and potential negative societal impact of their work.",
" The paper proposes a one-shot learning mechanism that adds a new class to the network's output using quantile normalization of the new input (to the last layer) w.r.t. weights in the last layer that correspond to the old classes. ## Strengths:\n1. The proposed one-shot classification method is simple and, unlike previous work, doesn't require layer normalization in the second last layer.\n2. The method works reasonably well.\n3. The method is novel, as far as I know.\n\n## Weaknesses:\n1. This work has \"Hebbian\" in the title and some part of the text, but as far as I can tell has nothing to do with Hebbian learning. (Elaborated in the Questions section)\n2. Performance of the method is not adequately compared to previous work. (Again, elaborated below) \n\n## Summary\n\nMy current score is 4 (borderline reject). The paper proposed an interesting method, but it doesn't properly explain why it works, and it doesn't justify the lack of comparison (in terms of performance and computational complexity) with previous methods. I'm willing to increase the rating if my points are addressed, however. ## Large concerns\n### Hebbian learning\nThis work does not describe a Hebbian mechanism of learning. Hebbian means $\\Delta W_{ij} = f(x_i, y_j)$ for input neuron $x_i$ and output neuron $y_j$ (in the classical Hebbian sense, $f(x, y)=xy$). Eq. 3 is fundamentally not Hebbian because it uses information about other weights. Lines 154-155 say quantile normalization \"is suitable for implementing Hebbian theory\", but this claim is not backed up. The authors should either thoroughly explain what makes their method Hebbian, or remove mentions of Hebbian learning.\n\n### Comparison with previous methods\nIn lines 104-109 the authors try to explain why they’re not comparing their method to previous work directly:\n> It is meaningless to compare the above three approaches with weight imprinting, because weight imprinting does not contain any optimization algorithm. Therefore, in principle, there is no reason for weight imprinting methods to outperform other methods by themselves in accuracy. The performance of weight imprinting methods is uniquely determined by the backbone DNN without any randomness, hence its performance is suitable as a reference baseline for other methods. Thus, weight imprinting does not aim for the highest accuracy but for practical convenience and reference role as a baseline method.\n\n– I can agree with not chasing the best accuracy, but then there must be a discussion of concrete practical benefits of the method. (No optimization != faster, as quantization takes some time too.)\n\n### Overall reason behind the method\nSection 3.1: I did not understand why quantile normalization helps. Shouldn’t it make the response in each neuron $y$ similar to that of the new class? Either way, the authors should spend more time explaining their method and why it works, as it is the core contribution.\n\n## Unclear parts in the text\n\nLines 135-136: \"the backbone DNN model is a very good model as a heritage of mankind and should not be changed as much as possible (especially for many non-expert users)\"\n– this is not an explanation.\n\n153: what probability distributions? Everything has been deterministic so far, so it's worth explaining what distributions are discussed here. Is it over individual weights? Individual neurons?\n\n166: \"The range of applicable models is yet unclear, but in principle it is wider than Qi’s method.\"\n– what does it mean? 
The line before said that you need a network with a dense final layer, which clearly defines the range of models.\n\n223-224: what’s good accuracy for practical uses? It’s never explained how the authors came up with that value\n\n## Small corrections\n**Note**: the paper contains a lot of grammatical errors. One simple way to fix it is to copy-paste the source code into a google doc and go through all suggested corrections.\n\nLine 15 (abstract): performs \"at a\", not \"a\", practical-level accuracy?\n\nAlso next sentence: Can write as \"DONE overcomes ...mentioned issues of DNNs...\"; Currently the sentence doesn’t read well.\n\n92: uses \"a\" classification\n\n93: aim\"s\"\n\n145: applies to, not apply into\n\n212-214: black and orange said twice\n\nWhy is the supplementary material just a corrected version of the main paper?\n The authors can mostly addressed the limitations of their work (although see below), but they did not account for the potential negative impacts. I'm not flagging it for an ethics review as it's just a small correction though.\n\nChecklist 1C: the described model can be easily adapted for face recognition. This does imply potential negative societal impacts and should be discussed by the authors, even though previous work can be used in a similar way.\n\nRelated, the authors refer to practical uses throughout the whole paper, in particular when talking about performance of DONE. But it’s never discussed what those are, and why the achieved performance of the method is good enough for them.",
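The Hebbian definition quoted in this review, $\Delta W_{ij} = f(x_i, y_j)$ with the classical choice $f(x, y) = xy$, and the imprinting-style update it is contrasted with can be made concrete in a few lines. This is an illustrative reconstruction under stated assumptions, not the code of the paper or of Qi et al.; the L2 normalization stands in for classical weight imprinting.

```python
import numpy as np

def hebbian_update(W, x, y, lr=0.1):
    """Classical Hebbian rule: dW_ij = lr * x_i * y_j, purely local in (x, y)."""
    return W + lr * np.outer(y, x)     # W has shape (n_classes, n_features)

def imprint_new_class(W, x):
    """One-shot imprinting-style addition: append a weight row built from a single
    activation x. Qi et al. use the L2-normalized activation; a quantile-normalized
    variant would instead consult the existing rows of W, which is the non-local
    dependence the reviewer points out."""
    w_new = x / (np.linalg.norm(x) + 1e-12)
    return np.vstack([W, w_new])

# Toy usage
rng = np.random.default_rng(1)
W = rng.normal(size=(10, 64))          # 10 known classes, 64 features
x = rng.normal(size=64)                # activation of one novel example
W_heb = hebbian_update(W, x, y=np.eye(10)[3])   # classical, local update
W_plus = imprint_new_class(W, x)                # one-shot class addition, now (11, 64)
```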
" The paper presents a method for the one-shot classification of novel inputs. For the classification of novel inputs, the method uses representations obtained by the final dense layer of a pretrained backbone model to create novel weight vectors of new classes. Importantly, DONE does not optimize or change the backbone model in any way, making it applicable to various backbone models and computationally efficient. Strengths:\n1) Introducing quantile normalization in machine learning, a method that was unknown to me.\n\nWeaknesses:\n1) The link to Hebbian learning is weak and not meaningful for this paper goals.\n2) The idea that the last dense layer of a DNN includes representations that allow novel classifications is not novel, without any model modification, is not novel (e.g., [1] analyzed it theoretically).\n3) There are many writing problems. Too many to mention them all. Examples: a. talking about the heritage of mankind in line 135 (unprofessional); b. figure 2 has an irrelevant screen cursor at the right-most data point. More importantly, essential legend details of this figure appear in the text instead of the caption, which is confusing; c. “the simplest” in line 324 is also not professional (I think that Qi’s method is simpler). \n4) The method does not improve upon previous works Figure 2 shows that Qi’s method works similarly or better. I am not convinced that the backbone modifications it does are detrimental.\n5) Accepting the updated paper that was submitted via the supplementary materials, at a later deadline, is unfair to other authors.\n\n[1] Sorscher, B., Ganguli, S., & Sompolinsky, H. (2021). The geometry of concept learning. bioRxiv. My main suggestion is that you would make a further effort to convince this method's benefits over Qi's. In the current paper, the benefits are argued in text. Try to argue using a test where Qi's modification to the backbone model impairs performance, while DONE does not.\n\nAlso, I suggest adding more details about the practical implementation of quantile normalization.\n\nFinally, the paper needs to be thoroughly edited. The paper adequately discusses the method limitations."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"7zL5SCJpkCS",
"vQdbXCjuDLd",
"bBJgBScgIKR",
"KjmVUTTngY6",
"SG-1sdvVdq_",
"QZ4SI9K9Nyk",
"HHflK7UMjYZ",
"ENWmtUtFNKg",
"WMPDJrLh4Ns",
"VZ0ny8D9quU",
"VVq0XJG-PtS",
"vQdbXCjuDLd",
"8F-C0hW1ue",
"nips_2022_hciwLGxCt6S",
"nips_2022_hciwLGxCt6S",
"nips_2022_hciwLGxCt6S"
] |
nips_2022_ZidkM5b92G | BagFlip: A Certified Defense Against Data Poisoning | Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model. In a trigger-less attack, the attacker can modify the training set but not the test inputs, while in a backdoor attack the attacker can also modify test inputs. Existing model-agnostic defense approaches either cannot handle backdoor attacks or do not provide effective certificates (i.e., a proof of a defense). We present BagFlip, a model-agnostic certified approach that can effectively defend against both trigger-less and backdoor attacks. We evaluate BagFlip on image classification and malware detection datasets. BagFlip is equal to or more effective than the state-of-the-art approaches for trigger-less attacks and more effective than the state-of-the-art approaches for backdoor attacks. | Accept | The paper proposes a new method for certified defense against data poisoning, in both trigger-less and backdoor scenarios. The method augments previous work (Bagging) with random flipping of labels. The latter enables computation of probabilistic certificates, although this results in a huge computational overhead. Various relaxation techniques are proposed to improve the computational burden, bringing the cost to just one order of magnitude "above" the baseline. Experiments show reasonable improvement of defense strength compared to the baselines, although the computational cost remains an issue. Apart from its incremental character and the computational complexity, the method is well executed and theoretically sound. | train | [
"8rHyuP7pKgg",
"nu7HcRR38x",
"3iKpOmww-vY",
"1LbGgagkRbT",
"g_lsCN6Gvxy",
"EfKNTQ_86q0",
"RHNPmV5Ydx1",
"_MO-8-hcQGO",
"-NpHci-nT5",
"Otam2dF5op3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. The authors clarify that the novelty of the method is a novel smoothing distribution combining the distributions of Bagging and FeatFlip. I realized the proposed method's difference through the authors' rebuttals, and I would like to reconsider the contribution (2 fair) of the original review.\n\n2. Thanks to the authors for their efforts on Q2 (In terms of malware detection, are data poisoning attacks observed in the real world?) of the original review. I found that ML-based approaches have already been commercialized and, in particular, [15,16] was very helpful in understanding the article's contribution.\nBesides, when I self-criticize the original review, I think Q2 of the original review is out of the scope of the article. As the authors claim, there are no cases of data poisoning attacks in the real world yet, but I agree that defenses against them will be helpful in the future. \n",
" Dear Reviewer qMzU,\n\nAs the end of the discussion is approaching, we kindly ask you to consider our responses to your concerns. We are very thankful for your comments and suggestions that helped improve our paper.\n\nBest regards,\nAuthors of Paper3023.",
" Thanks for summarizing the computational costs of the different approaches. It would be great to see this in an appendix.\n\n> We compare with Wang, Cao, Jia & Gong (2020) (we call their approach FeatFlip) as a baseline in section 7.2\n\nSorry, I somehow missed this. Thanks for clarifying.",
" Thanks for the detailed response. I have updated the rating score accordingly.",
" We thank reviewer qMzU for their comments and questions. \n\n>**Compared to FeatFlip, what is the technical novelty of the proposed method?**\n\nFeatFlip, RAB, and Bagging all work in the framework of randomized smoothing. Our first contribution is a novel smoothing distribution that combines the distribution used in Bagging and the distribution used in FeatFlip. This combination allows us to defend against trigger-less and backdoor attacks. However, this combination also makes it challenging to compute the certified radius due to the combinatorial explosion of the sample space. Our second contribution is a partitioning strategy for the Neyman--Pearson lemma that results in an efficient certification algorithm and a relaxation of the Neyman--Pearson lemma that further speeds up certification.\n\nRandomized smoothing approaches all suffer from the curse of dimensionality [1]. Compared to FeatFlip, our smoothing distribution adds a bagging strategy, which is crucial to reducing the dimensionality of the sample space, thus, improving the scalability and effectiveness of the certification algorithm. As a result, \n* FeatFlip does not scale to the full MNIST dataset, as FeatFlip needs approximately 8000 TB memory to compute the certified radius. \n* BagFlip significantly outperforms FeatFlip against $FL_1$ on MNIST-17, as shown in Table 2.\n\n[1] Aounon Kumar, Alexander Levine, Tom Goldstein, and Soheil Feizi. Curse of dimensionality on randomized smoothing for certifiable robustness. ICML2020\n\n>**In terms of malware detection, are data poisoning attacks observed in the real world?**\n\nThe malware detection approaches can be divided into dynamic and static ones. The static approaches process executable files without running them, extracting the features used for classification directly from the binary and its meta-data. As static approaches, like querying from a maintained malware database, are still used, ML-based approaches have been studied [2,3,4,5,6,7] and deployed in commercial endpoint protection solutions [8,9,10].\nThe research community has studied evasion attacks [11,12,13] and data poisoning attacks [14] on malware detection. Evasion attacks have successfully attacked open-source and commercial malware detectors [15,16]. Although we cannot find articles about data poisoning attacks observed in the real world, we believe data poisoning attacks can likely break malware detectors in the future, as evasion attacks have done before.\n\n\n[2] Zheng Leong Chua, Shiqi Shen, Prateek Saxena, and Zhenkai Liang. Neural Nets Can Learn Function Type Signatures From Binaries. In USENIX Security Symposium, 2017\n\n[3] Marek Krcál, Ondrej Švec, Martin Bálek, and Otakar Jašek. Deep Convolutional Malware Classifiers Can Learn from Raw Executables and Labels Only. ICLR 2018\n\n[4] Enrico Mariconti, Lucky Onwuzurike, Panagiotis Andriotis, Emiliano De Cristofaro, Gordon Ross, and Gianluca Stringhini. MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models. Network and Distributed System Security Symposium 2017\n\n[5] Igor Santos, Felix Brezo, Xabier Ugarte-Pedrero, and Pablo G. Bringas. Opcode sequences as representation of executables for data-mining-based unknown malware detection. Information Sciences, 2013.\n\n[6] Joshua Saxe and Konstantin Berlin. Deep neural network based malware detection using two dimensional binary program features. MALWARE 2015\n\n[7] Andrii Shalaginov, Sergii Banin, Ali Dehghantanha, and Katrin Franke. 
Machine Learning Aided Static Malware Analysis: A Survey and Tutorial. In Ali Dehghantanha, Mauro Conti, and Tooska Dargahi, Cyber Threat Intelligence, 2018.\n\n[8] https://www.microsoft.com/security/blog/2017/12/11/detonating-a-bad-rabbit-windows-defender-antivirus-and-layered-machine-learning-defenses/\n\n[9] https://www.blackberry.com/us/en/products/cylance-endpoint-security/cylance-is-blackberry-cybersecurity\n\n[10] https://www.fireeye.com/blog/products-and-services/2018/07/malwareguard-fireeye-machine-learning-model-to-detect-and-prevent-malware.html\n\n[11] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndic , Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion Attacks against Machine Learning at Test Time. Advanced Information Systems Engineering, 2013.\n\n[12] Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. Adversarial Examples for Malware Detection. ESORICS 2017\n\n[13] Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Da- vide Maiorca, Giorgio Giacinto, Claudia Eckert, and Fabio Roli. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables. EUSIPCO 2018.\n\n[14] Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers, Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea, 30th USENIX Security Symposium, USENIX Security 2021\n\n[15] https://www.elastic.co/blog/machine-learning-static-evasion-competition\n\n[16] https://skylightcyber.com/2019/07/18/cylance-i-kill-you/\n",
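The response above describes BagFlip's smoothing distribution as bagging (subsample k training examples) combined with random feature/label flipping. The sketch below illustrates only that noising step for binary features, under the assumption of a uniform flip probability `flip_prob` and label flips to a random other class; it says nothing about the certified-radius computation, which is the hard part discussed in the rebuttal.

```python
import numpy as np

def bagflip_sample(X, y, k, flip_prob, n_labels, rng):
    """Draw one noised training bag: subsample k examples with replacement, then
    flip each binary feature (and each label, to a random other class) with
    probability flip_prob."""
    idx = rng.integers(0, len(X), size=k)            # bagging step
    Xb, yb = X[idx].copy(), y[idx].copy()
    feat_mask = rng.random(Xb.shape) < flip_prob     # feature flips (0/1 features)
    Xb[feat_mask] = 1 - Xb[feat_mask]
    lab_mask = rng.random(len(yb)) < flip_prob       # label flips
    yb[lab_mask] = (yb[lab_mask] + rng.integers(1, n_labels, size=lab_mask.sum())) % n_labels
    return Xb, yb

# Toy usage: the smoothed classifier would train one base model per noised bag.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 20))
y = rng.integers(0, 10, size=1000)
bags = [bagflip_sample(X, y, k=100, flip_prob=0.05, n_labels=10, rng=rng) for _ in range(5)]
```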
" We thank reviewer 6p2x for their time and expertise. In the paper, we will add runtime analysis and the experiments suggested in the second question.\n\n>**Comparison with baseline about the runtime.**\n\nThe runtime of training (about 16 hours for 1000 classifiers on MNIST on a single GPU) is similar to baselines because BagFlip only adds noise to the bags of the training data. \n\nAt inference time, BagFlip first evaluates the predictions of N classifiers, and counts the number of the majority label as $N_1$ and the number of the runner-up label as $N_2$. Then, BagFlip uses a pre-computed lookup table to query the certified radius by $N_1$ and $N_2$. The inference time for each example contains the evaluation of N classifiers and an O(1) table lookup. Hence, there is no difference between BagFlip and other randomized smoothing baselines. \n\nThe main computational cost lies in the pre-computation of the lookup table. We show the computational cost of the table for the MNIST dataset as follows (single CPU time):\n* Bagging: 16 seconds\n* BagFlip (with $\\delta=1e-4$ in Section 6): 1.9 hours\n* BagFlip (without the technique in Section 6): ~85 hours (We report the estimated runtime because we cannot finish the experiment.)\n* FeatFlip needs approximately 8000 TB of memory to compute the table. Thus, FeatFlip is infeasible to run on the full MNIST dataset. FeatFlip is only evaluated on a subset of the MNIST-17 dataset containing only 100 training examples. \n* RAB does not need to compute the lookup table because it has a closed-form solution for computing the certified radius. \n\nThus, we draw the following conclusions from the runtime analysis,\n1. BagFlip has similar training and inference time compared to other baselines.\n2. With the technique proposed in Section 6, BagFlip needs more pre-computation time than Bagging and RAB. We argue that the pre-computation is feasible because it only takes 12% of the time when compared to training.\n3. The technique in Section 6 is useful to reduce the pre-computation time (from >85 hours to 1.9 hours).\n4. BagFlip is more scalable than FeatFlip.\n \n>**Compare with baselines while keeping the normal accuracy the same.**\n\nWe thank the reviewer for suggesting this. We tried it now and BagFlip is still better than Bagging (table below). Note that BagFlip is a generalization of Bagging—i.e., Bagging is the same as BagFlip if the noise level alpha=0. In the paper, we will add experiments for MNIST and EMBER when comparing BagFlip with Bagging. We note that this kind of comparison is time-consuming and needs many rounds of tuning to ensure the normal accuracy of the two approaches are the same, so we did not conduct this kind of experiment in the short time given to write the rebuttal for other datasets and baselines. 
\nFor MNIST, we find Bagging with k=80 achieves similar normal accuracy compared to BagFlip-0.9 (k=100) on $F_{1}$: \n\n| R | 0 | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 |\n|-------------------|-----------|-----------|-----------|----------|-----|------|\n| Bagging k=80 | 93.58 | 71.11 | 0 | 0 | 0 | 0 |\n| BagFlip-0.9 k=100 | **93.62** | **75.95** | **27.73** | **4.02** | 0 | 0 |\n\nFor EMBER, we find Bagging with k=280 achieves similar normal accuracy compared to BagFlip-0.95 (k=300) on $F_{1}$: \n\n| R | 0 | 0.07 | 0.13 | 0.20 | 0.27 | 0.33 |\n|--------------------|-----------|-----------|-----------|-----------|------|------|\n| Bagging k=280 | 79.06 | 75.32 | **70.19** | 14.74 | 0 | 0 |\n| BagFlop-0.95 k=300 | **79.17** | **75.93** | 69.30 | **57.36** | 0 | 0 |\n",
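The inference-time procedure described in this response (evaluate N base classifiers, take the majority count N1 and runner-up count N2, then an O(1) query into a precomputed table) is simple enough to sketch. The `radius_table` contents and helper names below are hypothetical placeholders; building that table is exactly the expensive precomputation whose cost the authors report.

```python
from collections import Counter

def certified_prediction(classifiers, x, radius_table):
    """Majority vote over the base classifiers plus an O(1) certified-radius lookup.

    `radius_table[(n1, n2)]` is assumed to hold the precomputed certified radius for
    a majority count n1 and runner-up count n2 (the expensive precomputation)."""
    votes = Counter(clf(x) for clf in classifiers)
    (label, n1), *rest = votes.most_common(2)
    n2 = rest[0][1] if rest else 0
    return label, radius_table.get((n1, n2), 0)

# Toy usage with 5 dummy base classifiers and a trivial one-entry table.
classifiers = [lambda x, b=b: (x + b) % 3 for b in (0, 0, 0, 1, 2)]
print(certified_prediction(classifiers, 0, {(3, 1): 2}))   # -> (0, 2)
```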
" We thank reviewer vQcS for their time and expertise. In the paper, we will add runtime analysis and discuss the speculated reasons why BagFlip does not work for higher-dimensional datasets. \n>**Comparison with baseline about the runtime.**\n\nThe runtime of training (about 16 hours for 1000 classifiers on MNIST on a single GPU) is similar to baselines because BagFlip only adds noise to the bags of the training data. \n\nAt inference time, BagFlip first evaluates the predictions of N classifiers, and counts the number of the majority label as $N_1$ and the number of the runner-up label as $N_2$. Then, BagFlip uses a pre-computed lookup table to query the certified radius by $N_1$ and $N_2$. The inference time for each example contains the evaluation of N classifiers and an O(1) table lookup. Hence, there is no difference between BagFlip and other randomized smoothing baselines. \n\nThe main computational cost lies in the pre-computation of the lookup table. We show the computational cost of the table for the MNIST dataset as follows (single CPU time):\n* Bagging: 16 seconds\n* BagFlip (with $\\delta=1e-4$ in Section 6): 1.9 hours\n* BagFlip (without the technique in Section 6): ~85 hours (We report the estimated runtime because we cannot finish the experiment.)\n* FeatFlip needs approximately 8000 TB of memory to compute the table. Thus, FeatFlip is infeasible to run on the full MNIST dataset. FeatFlip is only evaluated on a subset of the MNIST-17 dataset containing only 100 training examples. \n* RAB does not need to compute the lookup table because it has a closed-form solution for computing the certified radius. \n\nThus, we draw the following conclusions from the runtime analysis,\n1. BagFlip has similar training and inference time compared to other baselines.\n2. With the technique proposed in Section 6, BagFlip needs more pre-computation time than Bagging and RAB. We argue that the pre-computation is feasible because it only takes 12% of the time when compared to training.\n3. The technique in Section 6 is useful to reduce the pre-computation time (from >85 hours to 1.9 hours).\n4. BagFlip is more scalable than FeatFlip.\n\n>**Is there a reason why the flipping approach by Wang, Cao, Jia & Gong (2020) was not considered as a baseline in the experiments?**\n\nWe compare with Wang, Cao, Jia & Gong (2020) (we call their approach FeatFlip) as a baseline in section 7.2. BagFlip is more scalable than FeatFlip, and BagFlip significantly outperforms FeatFlip against $FL_{1}$ on MNIST-17. \n\n>**In Section 7.2, BagFlip is not found to be effective against backdoor attacks for CIFAR10 and EMBER. Are the authors able to speculate why BagFlip doesn’t work for the higher-dimensional datasets?**\n\nWe speculate there are two possible reasons. \n1. Randomized smoothing has a non-negligible trade-off between normal accuracy and certified radius for evasion attacks. We still see the gap in the results between higher-dimensional datasets like CIFAR10 in the state-of-the-art randomized smoothing tools [1]. As the backdoor attack is stronger than the evasion attack, we can expect that the gap still exists. \n2. The smoothing distribution of BagFlip samples a sub-training set much smaller than the original dataset. As the higher-dimensional datasets require more training data, the sub-training set can hurt the normal accuracy of the model.\n \n[1] Double Sampling Randomized Smoothing,Linyi Li, Jiawei Zhang, Tao Xie, Bo Li, ICML2022\n\n>**I noticed the use of a superscript “?” in line 171 and elsewhere. 
Is that intentional?**\n\nThey are intentional. We use the superscript “?” to denote a worst-case algorithm. We will add the missing definition of this notation. \n",
" This paper proposes a certified defense against triggerless and backdoor data poisoning attacks. The authors consider a threat model where the attacker can poison training instances (and the test instance for the backdoor attack) by flipping up to s of the features/label per instance. The defense is based on randomized smoothing, where the smoothing operation involves bagging of the training set and random flipping of features/labels. The certificate provides a (probabilistic) guarantee that up to R instances can be poisoned without changing the model’s prediction on a test instance. The derivation and computation of the certificate (i.e. R) is quite involved, and various relaxations are considered to speed up the computation. Experimental results demonstrate improved certified accuracies compared to bagging alone for triggerless attacks, when the fraction of poisoned instances is high and the number of flips is small. Results for backdoor attacks vary depending on the dataset. *Originality*\n\nThe BagFlip method proposed in this paper seems to be a hybrid of methods by Jia, Cao & Gong (2021) and Wang, Cao, Jia & Gong (2020), in the sense that it combines bagging and feature/label flipping. In this sense, it could be viewed as a more incremental paper. However the authors should be credited for the derivation of the certificate, which was quite complex in the bagging/flipping setting. Another benefit of their method is that it encompasses both triggerless and backdoor threat models.\n\n*Quality*\n\nI was generally impressed by the quality of the paper, although I was not able to check the derivations in Sections 5 and 6 carefully. It was good to see a variety of datasets were tested in the experiments, as the conclusions did vary in some cases.\n\n*Significance*\n\nThe proposed method (BagFlip) seems to perform similarly to Bagging (Jia, Cao & Gong, 2021) for triggerless attacks in regimes where the certified accuracy remains reasonably high (e.g. above 70%). Given this observation, it’s not clear whether BagFlip should be preferred over Bagging. For instance, if the smoothing operation and certificate for BagFlip is significantly more computationally demanding, then BagFlip may not be preferred. It would be interesting to investigate this further, e.g. by comparing the computational cost of BagFlip versus baselines. \n\n*Clarity*\n\nI enjoyed reading the paper. Sections 5 and 6 were somewhat challenging to read, but that is probably unavoidable due to the complexity of the analysis. Is there a reason why the flipping approach by Wang, Cao, Jia & Gong (2020) was not considered as a baseline in the experiments?\n\nIn Section 7.2, BagFlip is not found to be effective against backdoor attacks for CIFAR10 and EMBER. Are the authors able to speculate why BagFlip doesn’t work for the higher-dimensional datasets?\n\nI noticed the use of a superscript “?” in line 171 and elsewhere. Is that intentional? These are discussed adequately in Section 8.",
" This paper presents BagFlip, a model-agnostic certified approach that utilizes bagging and randomized smoothing to defend against various types of poisoning and backdoor attacks. This paper formulates the theoretical way to compute the certified radius of BagFlip and ways to speed up the computation. This paper evaluates BagFlip in various settings to show its effectiveness. Strengths:\n\n+ Strong attack model. \n+ This paper provides a certified guarantee in defending against backdoor attacks. \n+ The threat model and the goals are formulated clearly. \n\nWeaknesses:\n- Potential runtime problem. Could authors provide empirical evaluations for the running time?\n\n- The explanations of experimental results are unclear. When compared with bagging, the same k is used for both bagging and BagFlip. However, BagFlip adds additional Gaussian noise to the k training examples. Therefore, the noise added by BagFlip is larger than bagging. As a result, the normal accuracy for bagging is higher compared with BagFlip. Could authors use a smaller k for bagging such that bagging and BagFlip have similar normal accuracy? Given the similar normal accuracy, we can compare the robustness of the two methods. This is also applicable in the comparison to FeatFlip and RAB. See above. See above.",
" The article proposes a model-agnostic certified method to handle data-poisoning attacks that maliciously modify the training dataset. The article addresses trigger-less and backdoor attacks, and solves them with a noising/smoothing approach on training datasets. The proposed method was evaluated on image and malicious portable executable datasets including MNIST, CIFAR10, and EMBER, and showed robust accuracy in trigger-less and backdoor attacks, respectively. (strength)\nThe article is well-structured, easy-to-follow. The proposed method (BagFlip) is technically sound.\n(weakness)\nThe proposed method was compared only for a limited control group (FeatFlip[35] and RAB[39]). Considering that FeatFlip[35] proposed a randomized smoothing strategy, the technical novelty of the proposed method is weak.\n 1. Section 7.2 describes the differences in the proposed method compared to FeatFlip. However, considering that FeatFlip proposed a randomized smoothing strategy, utilizing additional noise to cope with trigger-less attack seems like a minor modification of FeatFlip. Compared to FeatFlip, what is the technical novelty of the proposed method?\n2. While it is true that inductive learning approaches (including machine learning and deep learning) are being explored, many malware detection software is still based on traditional search/traverse algorithms based on signatures. In terms of malware detection, are data poisoning attacks observed in the real world?\n The proposed method uniformly flips features and labels regardless of data distribution."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"nu7HcRR38x",
"g_lsCN6Gvxy",
"RHNPmV5Ydx1",
"EfKNTQ_86q0",
"Otam2dF5op3",
"-NpHci-nT5",
"_MO-8-hcQGO",
"nips_2022_ZidkM5b92G",
"nips_2022_ZidkM5b92G",
"nips_2022_ZidkM5b92G"
] |
nips_2022_JvIFpZOjLF4 | Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction | Recently, neural implicit surfaces learning by volume rendering has become popular for multi-view reconstruction. However, one key challenge remains: existing approaches lack explicit multi-view geometry constraints, hence usually fail to generate geometry-consistent surface reconstruction. To address this challenge, we propose geometry-consistent neural implicit surfaces learning for multi-view reconstruction. We theoretically analyze that there exists a gap between the volume rendering integral and point-based signed distance function (SDF) modeling. To bridge this gap, we directly locate the zero-level set of SDF networks and explicitly perform multi-view geometry optimization by leveraging the sparse geometry from structure from motion (SFM) and photometric consistency in multi-view stereo. This makes our SDF optimization unbiased and allows the multi-view geometry constraints to focus on the true surface optimization. Extensive experiments show that our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions, thus outperforming the state-of-the-arts by a large margin. | Accept | This paper introduces new and useful losses, presents a good experimental setup, supplies analysis of the bias, and is clearly written. I encourage the authors to discuss similarities and differences to NeuralWarp and pointcloud->SDF methods in their revision. | train | [
"alZYd-1Ivn",
"LUXzpNG4YEB",
"HOny6jzdia",
"ifv8WuRcqie",
"r_OIQsqXBOM",
"DNTWxA8ionRM",
"C36-1vsy-tD",
"yJ5RMieDgxJ",
"TxBxAKaOubw",
"i0pMhJoZHkZ",
"Uak6pSh69P",
"GidTo6MYlec"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the careful review and constructive discussions. We will revise our paper following the comments made by all the reviewers.",
" I would like to thank the authors for carefully addressing my concerns, especially w.r.t point2surf and feature generation from pointclouds. I have updated my recommendation. Thanks!",
" We are very grateful for your constructive suggestions and help in improving the paper! We will add the failure analysis and the references to the corresponding experiments in the final revision of our main paper.",
" Thank you very much for the rebuttal!\n\nI believe that my raised questions are addressed. I would, however, suggest to at least provide references to the main paper referencing to the added experiments for Q2 and Q3 in the supp mat. Further, as one additional content page is added for the final version, I believe adding the failure analysis of Q10 to the main paper would make the paper significantly stronger.\n\nThanks!",
" We thank the reviewer for the detailed comments and constructive suggestions. We first discuss current neural implicit surface reconstruction methods and then show our responses to the questions.\n\n**R1. Discussion on current implicit surface reconstruction methods.**\nCurrent implicit surface reconstruction methods can be categorized into three types.\n\n1.Reconstruction from scanned point clouds. Generally, these point clouds are uniformly scanned, thus representing relatively complete geometries. Some surface reconstruction methods, such as Points2Surf, ONet etc, can be used to reconstruct surfaces from those uniform and complete point clouds.\n\n2.Reconstruction by multi-view stereo(MVS) methods. These methods take captured images as input. They first use SFM to recover camera parameters and obtain sparse 3D points. Then, MVS methods are used to compute dense point clouds. At last, the aforementioned surface reconstruction methods are applied on these dense point clouds to reconstruct surfaces.\n\nIn the aforementioned methods, the uniform and complete point clouds need to be explicitly obtained for the subsequent surface reconstruction. It should be noted that, these methods are rarely applied on the sparse point cloud produced by SFM to reconstruct surfaces.\n\n3.Reconstruction from posed images by differentiable rendering. These methods are all based on NeRF framework. The NeRF-based methods, such as NeuS and Neuralwarp, also need to use SFM to compute camera parameters. However, they do not explicitly generate dense point clouds as MVS methods do. Based on the framework of differentiable rendering, they directly reconstruct surfaces from posed images with implicit representations, thus circumventing noises from point clouds.\n\nAs a NeRF-based method, Geo-Neus inherits advantages of differentiable rendering frameworks. Besides, we use sparse points from SFM as an explicit geometry supervision. Although sparse points cannot represent complete geometry structures of scenes, they provide useful geometry information of rich-textured areas. Thus, with proposed SDF supervision, our method can outperform the SOTAs by a large margin.\n\nIn general, Points2Surf uses either scanned point clouds like 1 or dense point clouds generated by MVS methods in 2 to reconstruct satisfactory surfaces. Indeed, Points2Surf can be also applied to the sparse points from SFM. However, these sparse points distribute very irregularly, which poses great challenges for existing reconstruction methods. Thus, the performance of Points2Surf degrades when handling sparse point clouds from SFM. The corresponding results are shown in our supplementary material Sec. B.\n\n**R1-Q1. Missing references of some SDF methods.**\nThanks for your kind suggestions. We will add the missing references in our revision. As mentioned in R1, we also use Points2Surf to reconstruct surfaces from point clouds from SFM. The results show that the performance of Points2Surf degrades in this special case.\n\n**R1-Q2. Minor issues in Table 1.**\nThanks for pointing out these issues. We have fixed them in the revision accordingly.\n\n**R1-Q3. Reliance on SFM pipelines and the role of MLP.**\nThanks for your detailed comments. SFM is a key step in image-based 3d reconstruction methods. It is a routine to use SFM to compute camera parameters as inputs to the later process by MVS methods. Recently popular NeRF-based methods also use SFM to compute camera parameters. 
If the camera parameters obtained by SFM are inaccurate, it is hard for image-based methods to reconstruct promising surfaces. In fact, the SFM pipeline performs well in most reconstruction cases and thus makes the image-based methods widely used in practical 3d reconstruction tasks. Based on the frameworks of differentiable rendering, NeRF-based methods use MLPs to fit the color field and geometry field in 3d space. Built on this framework, we introduce a SDF loss with SFM points to help the SDF network to fit the geometry of objects better.\n\n**R1-Q4. Generating features from 3D points.**\nThanks for your constructive suggestion about generating features from 3D points. Existing neural reconstruction methods are all based on NeRF and use the MLPs to fit the color field and the geometry field(such as SDF field). In our method, we use the sparse points in a geometric loss to directly supervise the SDF network instead of extracting features from them. We also tried to extract features from the sparse points by PointNet and made the SDF network conditioned on these features but got little improvement. Maybe the features are not representative because of the irregular neighbor regions caused by the extremely non-uniform distribution of sparse points. We are also very interested in exploring more geometric features from 3d points to help with the neural reconstruction pipeline for better reconstruction. We have added other baseline methods, such as Points2Surf and DVR, in our supplementary material Sec. B.",
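The explicit SDF supervision from sparse SFM points mentioned in this response reduces to a very small loss term: the predicted signed distance at those points should be close to zero. The sketch below shows one plausible form with a generic callable standing in for the SDF network; the exact weighting and the filtering of noisy points are omitted, and the names are assumptions.

```python
import numpy as np

def sfm_sdf_loss(sdf_fn, sfm_points):
    """Penalize |SDF| at sparse SFM points, which are assumed to lie on the surface.

    `sdf_fn` maps an (N, 3) array of 3D points to N signed distances; in the actual
    method this would be the SDF network, here it is any callable.
    """
    return float(np.mean(np.abs(sdf_fn(np.asarray(sfm_points)))))

# Toy usage with an analytic unit-sphere SDF standing in for the network.
sphere_sdf = lambda p: np.linalg.norm(p, axis=-1) - 1.0
pts = np.array([[1.0, 0.0, 0.0], [0.0, 0.9, 0.0], [0.0, 0.0, 1.1]])
print(sfm_sdf_loss(sphere_sdf, pts))   # mean of |0.0|, |-0.1|, |0.1| ~= 0.067
```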
" We thank the reviewer for the detailed comments and constructive suggestions. Below are our responses to the questions.\n\n**R2-Q1. Comparison with NeuralWarp.** Thanks for the helpful reminder of discussion between Geo-Neus and NeuralWarp. We will add the discussion in our revised paper. \nNeuralWarp is an exploration of the use of patch-match on neural surface reconstruction. It combines volumetric rendering with a patch warping integration technique, which aggregates colors from points sampled along the camera ray from source views with patch warping. This way of patch aggregation is similar with volume rendering and shares the same sampled points and the same weights with those used by color integration. Note that NeuralWarp uses patch match with the color aggregation to optimize weights of samples points, and thus to optimize the geometry indirectly. As we analyze in Sec.3.1, this kind of color integration operation will cause bias in the colors and the geometry. Therefore, NeuralWarp could not be trained from scratch and relies on the pre-trained model of VolSDF. \nConsidering the bias analyzed in sec.3.1, we propose multi-view photometric consistency supervision to directly optimize the geometry represented by the SDF network. We locate the predicted surface of the SDF network using SDF-based interpolation and use patch match to measure the photometric consistency among neighboring views. In this way, Geo-Neus can be trained from scratch and get better performance. We will add these related discussions in our revision.\n\n**R2-Q2. Noisy SFM points.** In our work, we use a radius filter with strict parameters to remove sparse points with large errors. This reduces the influence of sparse points with large errors as much as possible. Besides, existing SFM techniques could reconstruct sparse points with pixel-level reprojection errors, which to some extent guarantees the accuracy of sparse points. Moreover, our method also leverages the color rendering loss and multi-view photometric consistency loss to implicitly/explicitly supervise the SDF network. This also guarantees our reconstruction quality. For further exploration, we use raw points obtained by SFM to do the ablation study on 3 scans of DTU (scan 24, 37 and 40), and get the accuracy 0.42, 0.63 and 0.34 respectively. It can be seen, our methods could also perform well with noisy sparse points directly from SFM. This shows the robustness of our method. Details of the ablation study can be found in our revised supplementary material Sec. C.3.\n\n**R2-Q3. The error of extracted surface points by linear interpolation and hierarchical sampling.** Thanks for your constructive suggestions. We replace the linear interpolation by the hierarchical sampling to extract surface points and retrain our model on DTU scan24 and scan37. The quantitative results by the hierarchical sampling are 0.537 and 0.677, which is worse than those of the linear interpolation (0.375 and 0.537). This further validates the gap between the volume rendering and SDF modeling, supporting our assumption and theoretical analysis. We have added this experiment in our supplementary material Sec. C.4.\n\n**R2-Q4. Qualitative comparison with NeuralWarp.** Thanks for your suggestion about qualitative comparison with NeuralWarp. We have added the qualitative comparison with NeuralWarp in our supplementary material Sec. E.2.\n\n**R2-Q5. Time consumption and accuracy with fewer iterations.** We find the time consumption problem may be caused by different cpus we use. 
We reevaluated the time consumption on scan 24 on our own device and found the time consumption of NeuS is about 14h 48min while Geo-Neus gets about 16h 14min. For accuracy, Geo-Neus could get 0.57 with 150k iterations on average and 0.54 with 200k. NeuS gets 0.94 and 0.89 respectively.\n\n**R2-Q6. Limitation and Failure cases.** Our assumption in sec.3.1 is that the target objects are all opaque and solid. For transparent objects, the bias we talk about is invalid and the proposed supervision may not work well, just like the photometric consistency used in traditional reconstruction methods. For further exploration, we do experiments on glass bottles of Dex-Nerf dataset[C1] to test the performance of our method on transparent objects. The results validate our above analysis. We have added the related discussions in our supplementary Sec. F.2. \n[C1] Dex-NeRF: Using a Neural Radiance field to Grasp Transparent Objects, Jeffrey Ichnowski*, Yahav Avigal*, Justin Kerr, and Ken Goldberg, Conference on Robot Learning (CoRL), 2020",
" We thank the reviewer for the detailed comments and constructive suggestions. Below are our responses to the questions.\n\n**R3-Q1. Evaluation of the bias with 2 proposed losses.** In sec.3.1, we analyze the bias between color rendering and implicit geometry, which indicates that it is unreliable to depend solely on rendering loss to reconstruct accurate surfaces. That motivates us to introduce SFM points and photometric consistency to supervise the geometry explicitly. We try to evaluate the bias with 2 proposed losses and details can be found in our supplementary material Sec. G. With the help of proposed losses, the network could better simulate the real color field. But the sample bias and the weight bias still exist because of volume rendering. Our experiments show that the rendering quality of Geo-Neus is not improved compared with Neus owing to the integration effect of volume rendering. The predicted colors of surface (where the predicted SDF is 0) w/ and w/o the proposed losses are shown in Fig.1(b) in our paper. The geometric losses we propose could relieve the bias between color rendering and the implicit surface. Theoretical quantitative analysis of the bias is a very meaningful future work.\n\n**R3-Q2. Comparison with NeuralWarp and MVCGAN.** The difference between our method and NeuralWarp is shown in R2-Q1. MVCGAN adopts depth integral to represent surface points and then enforces multi-view geometry constraints on these surface points. As we discussed in Sec. 3.1, this introduces bias for true geometry modeling. To address this problem, we directly locate the zero-level set of SDF networks to represent the surface points and enforce explicit geometry constraints on these surface points. The experiments in geometry bias of volumetric integration verify the existence of this bias and show that our design can address this problem and achieve much better results.\n\n**R3-Q3. The selection of the reference view and source views and the visibility of intersection points.** The currently rendered image is selected as the reference image. Based on the sparse 3D points, we follow COLMAP to compute the total number of sparse 3D points observed by each image pair and their corresponding triangulation angles. When an image pair has more than 75\\% of these triangulation angles below $5^{\\circ}$, we remove this image pair. At last, we select the top 9 source views in terms of the number of co-visible 3D points. Since the currently rendered image is selected as the reference image, Eq. 19 shows that we select the nearest intersection points to represent the surface points for the reference image. This guarantees the intersection points are visible for the reference image. However, these intersection points may be not visible for all source views. To handle occlusions, we follow [9] to find the best four NCC scores to compute the photometric consistency loss.\n\n**R3-Q4. How to reconstruct the 3D scene without background?** Following the previous practices of IDR, VolSDF, NeuS and NeuralWarp, we use the visual hull of objects (defined by the segmentation masks of IDR) to reconstruct the 3D scene without background. Also, for fair comparison with previous neural reconstruction methods, we only evaluate the reconstruction inside the visual hull.\n\n**R3-Q5. Clarification of the conversion from SDF values to ray sample weights $w$.** Following NeuS, $w_i$ is computed as $w_i=T_i\\alpha_i$, where $T_i=\\prod_{j=1}^{i-1}(1-\\alpha_j)$. 
$\\alpha_i=\\text{max}(\\frac{\\Phi_s(sdf({\\textbf {\\textit {p}}}_i))-\\Phi_s(sdf({\\textbf {\\textit {p}}}_{i+1}))}{\\Phi_s(sdf({\\textbf {\\textit {p}}}_i))},0)$. $\\Phi_s(x)=(1+e^{-sx})^{-1}$ is a sigmoid function, where $s$ is a learnable parameter that controls the smoothness of the transition at the surface. We will add these details in our revision.\n\n**R3-Q6. The performance under sparse input views.** Thanks for your constructive suggestions. We select 3 input views from scans 97, 106 and 118 of DTU and train NeuS and our model from scratch. The Chamfer distances of our method on these three scans are 1.045, 0.782 and 0.855 respectively, which are better than those of NeuS, 1.74, 1.85 and 3.59. Visualization results are shown in our supplementary material Sec. F.1. This shows that our method can still reconstruct satisfactory surfaces in this case, demonstrating the effectiveness of our proposed method.",
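For readers who want to see how the quantities in R3-Q5 fit together, below is a minimal illustrative sketch (in NumPy) of the NeuS-style weight computation, i.e. $w_i = T_i\alpha_i$ with $\alpha_i$ derived from consecutive SDF samples through the sigmoid $\Phi_s$. This is a toy reimplementation for illustration only, not the authors' code; the function name and the final expected-depth line (with a hypothetical `t_mid`) are invented for the example.

```python
import numpy as np

def neus_style_weights(sdf_vals, s):
    """Toy sketch of w_i = T_i * alpha_i from SDF samples along one ray
    (ordered near to far). Illustrative only, not the paper's implementation."""
    phi = 1.0 / (1.0 + np.exp(-s * np.asarray(sdf_vals, dtype=float)))  # Phi_s(sdf)
    alpha = np.clip((phi[:-1] - phi[1:]) / phi[:-1], 0.0, 1.0)          # alpha_i
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]           # T_i = prod_{j<i}(1 - alpha_j)
    return T * alpha

# Expected color/depth are then weighted sums over the samples, e.g.
# depth_hat = (neus_style_weights(sdf_vals, s) * t_mid).sum()  # t_mid: hypothetical sample depths
```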
" We thank the reviewer for the detailed comments and constructive suggestions. Below are our responses to the questions. \n\n**R4-Q1. Quantitative evaluation on BlendedMVS.** Unlike DTU which provides accurate point clouds obtained by 3D scanner as groundtruth for evaluation, there are only meshes constructed by MVS pipelines from images in BlendedMVS. Following NeuS and UNISURF, we just provide the qualitative results on BlendedMVS.\n\n**R4-Q2. SDF loss by random sampling from sparse 3D points.** We use visible points of SFM points to supervise when rendering a specific view, the aim of which is to make the geometry supervision consistent with the process of color rendering. This consistency make the optimization of color loss and SDF loss aim on the same region of surface, which provides a mutual guarantee. For further exploration, we do the ablation study on scan24 and scan37 of DTU with random sampled points from the sparse points. The number of sampled points is set to be the average of points in our experiments. The reconstruction results with random sampling get 0.43 and 0.58, which are worse than those with view-aware sampling (0.38 and 0.54). This validates the effectiveness of view-aware sampling. We have added this experiment in our supplementary material Sec. C.5.\n\n**R4-Q3. Photometric consistency loss by RGB images.** We use the grey-scale images for less time and memory consumption. On a on NVIDIA 2080TI, the training time with grey-scale images is about 16h while that with RGB images is about 24h. The extra GPU memory consumption is 0.6G and 1.1G respectively. We do the ablation study on DTU scan24 to compare photometric consistency effect with grey-scale images and RGB images. It shows that the network degrades with RGB images (RGB: 0.44 vs. Grey-scale: 0.38). We suppose that gray images may reflect more geometric information. We have add this experiment in our supplementary materail Sec. C.6.\n\n**R4-Q4. Other measures for photometric consistency loss.** We try to use SSIM to compute the photometric loss. The result becomes 0.408 on scan 24, which is a little worse than that of using NCC, 0.375. In COLMAP, the authors use an improved version of NCC, bilateral weighted NCC. Due to the time limit, we will explore this in the future. Related discussions have been added in our supplementary material Sec. C.7.\n\n**R4-Q5. How to determine the patch size of $11\\times11$?** Traditional MVS methods, such as COLMAP [28], Gipuma [9], ACMM [34], generally use the patch size of $11\\times11$ to compute patch similarity. Following these practices, we also use this patch size to compute NCC scores.\n\n**R4-Q6. Details of volumetric integration.** Thanks for your helpful reminder. The expected depth maps we use are obtained by the depth integration, which is similar with color integration. In volume rendering, the expected color is calculated as: $\\hat{C}= \\sum_{i=1}^n{w\\left( t_i \\right) \\hat{c}\\left( t_i \\right)}$. Similarly, we calculate the expected depth $\\hat{d}$ as:\n$\\hat{d}= \\sum_{i=1}^n{w\\left( t_i \\right) d\\left( t_i \\right)}$. We have add these details in our supplementary materials Sec. A.\n\n**R4-Q7. Why is NeRF's rendering quality worse than NeuS/Geo-Neus?** NeuS and Geo-Neus use another rendering network, NeRF++, to model background. This enhances their rendering capability. VolSDF does not use anoher network to model background, thus achieving similar PSNR to NeRF. 
This makes the rendering quality of NeuS and Geo-Neus better than that of NeRF and VolSDF.\n\n**R4-Q8. Which situations do the proposed losses help in particular?** In our experiments, we find homogeneous areas are particularly improved, such as scan 40 in DTU and stone in BlendedMVS. Besides, areas with occlusions are also improved a lot, such as scan 37 in DTU and dog in BlendedMVS.\n\n**R4-Q9. Missing references and typos.** Thank you for pointing out these issues. We have added the missing references and fixed the typos in the revision accordingly.\n\n**R4-Q10. Limitations.** Thank you for your constructive suggestions. For scenes with strong specular highlights and transparent objects, our method degrades. Visualization results are shown in our supplementary material Sec. F.2. In addition, we also explore the potential of our method with sparse input views. We find that our method can still achieve satisfactory results (including geometry and rendering) in this case while NeuS degrades a lot. This shows the superiority of our proposed method. More details have been added in our supplementary material Sec. F.1.",
" Implicit representation has become a popular technique for 3D scene reconstruction. Existing methods do not utilize the multi-view geometry constraints in the learning. In this paper, by leveraging sparse geometry from structure-from-motion (SFM) and photometric constraints in multi-view stereo. \n \nThe paper compares the proposed method with both classical (colmap [28]) and recent deep learning baselines (IDR [40], VolSDF [39], NeuS [33], and NeuralWarp [7]).\n Strengths\n\n1) The paper builds on top of other methods such as IDF [40] in handling multi-view geometry constraints while learning the model. \n\nWeaknesses\n\n1) Some of the top-performing SDF methods from poinclouds are not cited. Using the pointclouds from SfM methods, we can generate implicit 3D representations using methods such as points2surf:\n\nErler et al. Points2Surf: Learning Implicit Surfaces from Point Cloud Patches, ECCV 2020.\n\n2) The chamfer distance shown in Table 1 has minor issues. For example, the proposed method is not the best performing method on scan 83. NeuS achieves better results. \n\n3) The paper heavily relies on existing structure-from-motion pipelines [51, 47, 22, 19] to compute the camera parameters. In some sense, the MLP networks are essentially utilized to further refine the multi-view geometry constraints. Furthermore, if the classical methods fail on a scene it would be hard to use this method. The improvement in the results need to be explained more carefully since the geometry constraints are already implicitly incorporated while using the structure-from-motion pipelines. The proposed method uses an 8-layer MLP for generating features from 3d points. We can also use a permutation-invariant method such as pointNet to generate features from 3D points. All the baselines are somewhat constrained to IDF-style algorithm. It would be good to also consider other baselines like Point2Surf or DVR in addition to the baselines shown:\n The paper clearly mentions the limitations, especially the challenge in improving computationally efficiency. ",
" This paper introduces a new method for geometry-consistent neural implicit surfaces learning. The proposed method includes a theoretical analysis of the gap between volume rendering and point-based SDF modeling and a solution by leveraging sparse points from SfM and utilizing patchmatch to provide geometry consistent supervision. This method is evaluated on DTU and BlendedMVS and achieves improvements compared with existing methods. In this paper, the main contribution is incorporating depth supervision (from sparse points) and patch match supervision into NeuS’s optimization framework, which leads to better reconstruction results. It seems that the improvement mainly comes from integrating patch match, shown in Tab. 2, which has been demonstrated by NeuralWarp. So the discussion about these two methods is expected. \n\nThe paper is well organized, and the evaluations are sufficient to support the proposed method. 1. What’s the key difference between NeuraWarp and Geo-NeuS? Both methods adopted patch-match based optimization for better geometry reconstruction. The authors should provide a detailed discussion between these two methods and show their strengths, especially in the introduction and related works. \n\n2. Sparse points from SfM are utilized to supervise the SDF network, where the sparse points with a radius filter are supposed on the surface (line 162). However, in practice, the sparse points from SfM are usually noisy, and wrongly incorporating these points with large errors may decrease the reconstruction quality. \n\n3. The authors claim that discrete sampling can cause bias (line 138) and linear interpolation is adopted to get surface points (line 195). What’s the error of the extracted surface points by these two methods, i.e., the proposed method of Geo-NeuS and the hierarchical sampling method in NeuS? This experiment may support the author’s assumption and theoretical analysis. \n\n4. The authors are encouraged to conduct a qualitative comparison between NeuralWarp and Geo-NeuS. \n\n5. The training time is 16h with 300k iterations on NVIDIA 2080TI (line 259) while the training time of NeuS reported in its original paper on NVIDIA 2080TI is also about 16h. However, Geo-NeuS requires extra computation for depth and patch match supervision. I’m not sure whether this number is correct. Besides, what about the accuracy with fewer iterations, as discussed in line 313, such as 150k or 200k iterations? \n\n Limitations or failure cases are not discussed in this paper. ",
" This paper presents neural implicit surfaces learning by enforcing explicit SDF constraints. It analyzed the gap between volume rendering integration and implicit SDF learning. Based on that, it proposes an explicit surface point SDF supervision where the surface points are estimated with SFM. in addition, it locates the zero-level set of SDF networks and a multiview photometric consistency loss is proposed to explicitly supervise the training of SDF networks. The results demonstrate high quality 3d scene geometry reconstruction, even on thin structures and large flat regions. - The paper is well organized and written. \n- Comprehensive experiments demonstrate the geometry improvements brought by the proposed 2 losses.\n- It provides theoretical analysis for the geometry bias, although I would like to see an evaluation of the bias after introducing the 2 losses (see details below)\n- Such warping patch losses have been explored in recent papers, such as NeuralWarp and MVCGAN, CVPR 2022 - For the multiview photometric consistency loss, it is not clear to me how the reference view is picked w.r.t. the source view. Is it pure random or within a perturbation range? Eq. 19 determines the ray intersection with the implicit surface. However, it seems to me it does not guarantee the intersection point will be visible for all the reference views. If the point occluded in the reference view, will the multiview constraint still be valid? \n\n- It is also not clear to me why this approach does not require foreground mask while still being able to reconstruct the 3d scene without background. \n\n- Please clarify how the ray sample weights w are converted from SDF values. \n\n- Both the SDF loss and the multiview consistency loss seem reasonable to help regularize the geometry learning. However, it is not clear to me how they are directly linked to the motivation given in sec 3.1. Will it be possible to evaluate the bias w/wo the proposed losses? \n - The multiview photometric consistency loss does not consider the view dependent effects. it remains questionable for reflective or specular objects. \n\n- It will be interesting to show the effect of number of reference cameras on the rendering quality. On one hand, with less reference views, the SFM might produce noisy result and affect the SDF guidance loss. It is not clear so far how the quality of SFM points affect the 3d scene reconstruction. On the other hand, multiview consistency loss might help regularize the neural field distribution and compared to baseline models, it might still converge with sparser training cameras. ",
" The authors of this manuscript propose Geo-NeuS, a novel neural implicit 3D reconstruction method.\nThey combine a SDF-based neural implicit surface representation, that is usually only optimized with the reconstruction loss, with explicit multi-view constraints.\nMore specifically, they incorporate supervision in the form of i) the sparse point cloud obtained from SfM, ii) photometric consistency from classic multi-view stereo.\nThey first theoretically analyze the difference of volume rendering and surface rendering-based approaches wrt. 3D reconstruction, and then show experimentally that adding the proposed constraints leads to better 3D reconstruction and more accurate surfaces. # Strengths\n\n1.) The proposed method clearly improves quantitatively (Tab 1) as well as qualitatively (Fig 4) over the state-of-the-art.\n\n2.) The proposed additional losses are well-grounded, not too complicated in general, and clearly have benefits for the task of 3D reconstruction (Tab. 2)\n\n3.) The ablation study (Tab. 3) of applying the proposed losses either via expected depth maps or via surface points found via root finding is very interesting. \n\n4.) While the findings of the theoretical analysis of volume vs surface rendering approaches for 3D reconstruction (Sec. 3.1) are expected, it is still valuable to have it.\n\n5.) The manuscript is written clearly, has a good structure, and experimental evaluation is enough to show the benefits of the proposed system.\n\n# Weaknesses\n\n1.) BlendedMVS is not evaluated quantitatively, and I couldn't find an argument for this.\n\n2.) It is not clear to me why the explicit SDF supervision (Sec. 3.2) is done with occlusion handling (L. 165) and view-aware (L. 173). It is only stated that \"the introduced SDF loss is consistent with the process of color rendering\" (L. 177 - 178). Instead, in every iteration, a subset of the sparse point cloud could be sampled and the loss can be applied on the 3D location without any occlusion reasoning etc. which seems simpler and more straight-forward. I believe a good reason / ablation study for this complicated setup is missing\n\n3.) The described process of finding the surface intersection (L. 197ff) is very similar to proposed root-finding methods from ray marching-based approaches for neural implicit surfaces like [23] and a short note on this+citation on this would be helpful for the reader.\n\n4.) The fact that the photometric consistency loss is only applied on grey-scale images (L. 211ff) is interesting, and an ablation study on this would be helpful.\n\n5.) NCC is used as the phometric consistency metric. Have the authors investigated other measures as well? This could be an interesting ablation study (but not a must in this manuscript).\n\n6.) It is not clear how the patch size of 11x11 (L. 227) was determined.\n\n7.) The fact that Colmap's best trim parameter is 7 (L. 253) should be cited, e.g. [23]. \n\n8.) The visual ablation study (Fig 5) could be bigger with zoom-in windows to better see the differences, similar to Fig 1 of Supp Mat.\n\n9.) Table 3 / \"Geometry bias of volumetric integration\": very interesting, but details are missing. Are here the expected depth maps used obtained via volume rendering? I think at least the supp mat should contain relevant formulas how the quantities are obtained.\n\n10.) Appendix C: Why is NeRF's rendering quality worse than NeuS / Geo NeuS?\n\n11.) Would be interesting to further discuss or which situations the losses help in particular, e.g. 
mostly for specular areas?\n\n12.) Fig 4 caption typo: NueS -> NeuS\n 1.) Why is BlendedMVS not evaluated quantitatively?\n\n2.) Why is the explicit SDF supervision (Sec. 3.2) implemented with occlusion handling and in a view-aware manner, instead of a simple loss via sampling in 3D space?\n\n3.) Why are grey-scale images used for the photoconsistency loss? Could an ablation study help to support this design choice?\n\n4.) Why did the authors use a patch size of 11x11?\n\n5.) Where / For which type of surfaces do the proposed losses help in particular (e.g. specular surfaces, homogeneous areas, etc)? The authors' discussion on limitations and negative societal impact (L. 331 - 334) is quite limited. The authors could tackle more complex datasets / more complex scenes if the proposed system does not fail on DTU and the selected BlendedMVS scenes. What happens if photoconsistency is not given, e.g., because of strong specular highlights? How does the model perform in the presence of transparent surfaces? How could the model handle a sparse set of inputs instead of the considered dense coverage of scenes? These could be starting points for an interesting limitation discussion."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"LUXzpNG4YEB",
"r_OIQsqXBOM",
"ifv8WuRcqie",
"yJ5RMieDgxJ",
"TxBxAKaOubw",
"i0pMhJoZHkZ",
"Uak6pSh69P",
"GidTo6MYlec",
"nips_2022_JvIFpZOjLF4",
"nips_2022_JvIFpZOjLF4",
"nips_2022_JvIFpZOjLF4",
"nips_2022_JvIFpZOjLF4"
] |
nips_2022_JSBgIaxAXk9 | Differentially Private Linear Regression via Medians | Linear regression is one of the simplest machine learning tasks. Despite much work, differentially private linear regression still lacks effective algorithms.
We propose a new approach based on a multivariate extension of the Theil-Sen estimator.
The theoretical advantage of our approach is that we do not directly rely on noise addition, which requires bounding the sensitivity. Instead we compute differentially private medians as a subroutine, which are more robust.
We also show experimentally that our approach compares favourably to prior work. | Reject | Though the reviewers appreciate the contribution overall, and the application of median methods to regression for the purpose of avoiding/circumventing clipping is novel, the strength of the contributions remains limited in light of other existing work that achieves more favorable bounds and/or uses fewer assumptions. This is certainly the case for (unpublished) [VJT22], but it is also the case for [MKFI22], which also assumes Gaussian inputs but, crucially, does not rely on prior knowledge of the distribution's covariance. Moreover, in light of the effort required to perform DP covariance estimation via [KLSU19], it is not clear that, as stated in the rebuttal, an error proportional to condition number is non-trivial. For example, it is not clear that, allowing for such error, processes like [KLSU19], do not become trivial.
A reorganization of the paper that brings related work earlier and explains what is known regarding regression in the unbounded regime (especially in light of [VJT22], [MKFI22]), as well as DP for other settings in the unbounded regime, and that clarifies where the current work is placed and what its contributions beyond these works are, would strengthen the paper.
| train | [
"LUJTNEPnKh",
"9w8SiwzWUkK",
"7gI41sbdrDa",
"dAHa_8uzUgs",
"RgrAqzP2A2l",
"vvGKmnCnA5i",
"Vp_toF_XdMS",
"l3Qw_2MxkKx",
"yA_d2z92QIm",
"khIXIC4UfA3",
"vw8fvwkWYW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > 1. If the covariance is unknown, does the proposed algorithm have any error bounds guarantees?\n\nThanks for the great question! \n\nWe can modify Lemma A.6 (in a black-box manner) to work for a non-spherical design matrix. If the data comes from $N(0,\\Sigma)$ instead of $N(0,I)$, this is equivalent to replacing $X$ with $X \\Sigma^{1/2}$. So the Lemma must analyze $u^T \\Sigma^{-1/2} X^{-1} y$. Since $X$ is spherically symmetric, the only relevant quantity is the length of $u^T \\Sigma^{-1/2}$. Looking at the proof of Proposition A.2, the final error bound will simply scale linearly with the length of this vector.\n\nOne bound is $\\\\|u^T \\Sigma^{-1/2}\\\\|\\_2 \\le \\\\|u\\\\|\\_2 \\cdot \\\\|\\Sigma^{-1/2}\\\\|\\_{\\text{op}} = 1 / \\sqrt{\\lambda\\_{\\text{min}}(\\Sigma)}$. So the error of our algorithm can be bounded in terms of the smallest eigenvalue of the covariance.\n\nHowever, we can do a bit better: For the $i$-th coordinate, the error scales with $\\\\|u^T \\Sigma^{-1/2}\\\\|_2$ where $u=e_i$ is the $i$-th standard basis vector. If we look at the squared error summed over all the coordinates, this scales with $$\\sum_i^d \\\\|e\\_i^T \\Sigma^{-1/2}\\\\|\\_2^2 = \\sum_i^d e_i^T \\Sigma^{-1} e_i = \\mathsf{trace}(\\Sigma^{-1}) = \\\\|\\Sigma^{-1/2}\\\\|\\_{\\text{F}}^2 = \\sum\\_i^d 1/\\lambda_i(\\Sigma).$$\n\n**In summary, for an unknown covariance $\\Sigma$ of the Gaussian features, we can bound the infinity norm of the error in terms of $1 / \\sqrt{\\lambda\\_{\\text{min}}(\\Sigma)}$ and the 2-norm of the error in terms of $\\sqrt{\\sum\\_i^d 1/\\lambda\\_i(\\Sigma)}$.**\n\n> 2. If it does, do the bounds scale with condition number?\n\nThese quantities $1 / \\sqrt{\\lambda\\_{\\text{min}}(\\Sigma)}$ and $\\sqrt{\\sum\\_i^d 1/\\lambda\\_i(\\Sigma)}$ are not quite the condition number, but they are closely related.\n\nYou will note that if we double the covariance $\\Sigma \\mapsto 2\\Sigma$ this has no effect on the condition number, but it will reduce the error bounds above by a factor of $1/\\sqrt{2}$. This is the correct behaviour because, if we scale up the features while holding the noise scale $\\sigma$ fixed, then, intuitively, the ratio of signal to noise is increased and we can obtain more accurate estimates of the parameters.\n\nOf course, if we use excess risk or, equivalently, $\\\\|\\hat\\theta-\\theta^\\*\\\\|\\_\\Sigma$ (thanks for the correction -- this is not quite the Mahalanobis norm) as our metric then we avoid this scaling issue. We decided to give a guarantee in terms of $\\\\|\\hat\\theta-\\theta^\\*\\\\|\\_\\infty$ because it makes sense for our algorithm and, from a statistics perspective, estimating the parameters is the end goal.\n\nIt would be a very interesting question to obtain a practical linear regression algorithm whose error guarantees adapt to the covariance. To the best of our knowledge, even for the simpler task of mean estimation this is still unresolved. (Although, like for linear regression, we have theoretical results showing this is possible.)",
" I agree with the authors that the main contribution is to provide a practical DP linear regression algorithm. I understand that [VJT22] is publicly available after the NeurIPS deadline. I am just trying to provide more context and details for other reviewers from a theoretical perspective. Even if [VJT22] is more practical, this paper still provides a different way of looking at this problem. I want to emphasize that indeed the minimax error rates always depend on the data assumptions. However, knowing the covariance or not can be a critical assumption in practice from an algorithmic perspective. And this is not relevant to the error rate no matter if it would scale with the condition number. From an algorithmic perspective, the prior works I listed do not need to know any information about the covariance, and the error bound does not scale with the condition number of covariance. However, this paper requires knowing the covariance of the distribution to get such error bounds. (Identity covariance is equivalent to knowing the covariance because you can whiten the data.) I am curious about this because there is a similar approach by [Dep20] for heavy-tailed regression with sub-gaussian rate. The connection is that both algorithms are using Median of Mean for linear regression. The difference is how the median is computed. In [Dep20], covariance is needed because the way they compute median requires prior knowledge of covariance. But here I don't think the coordinate-wise median needs to know the covariance. There are two orthogonal questions. 1. If the covariance is unknown, does the proposed algorithm have any error bounds guarantees? 2. If it does, do the bounds scale with condition number? For the first question, as an educated guess, I think the proposed algorithm will still work. Because the infinity norm measures the error in each coordinate. I think both questions can be and should be justified theoretically. Currently, this paper claims in line 124 that covariance has to be known but without providing reasons why the algorithm would fail if the covariance is unknown. \n\nRemarks on the error metric and error rate: I do not think there is any standard in choosing which norm to use for linear regression. For example [RWY09] considers linear regression over $\\ell_q$ ball. But typically, for linear regression, our objective is to minimize $\\sum_i^n(y_i-\\theta^\\top x_i)^2$. It is not hard to show that this is equivalent to minimizing $\\||\\theta-\\theta^\\*\\||_\\Sigma$. (Minimizing excess risk is equivalent to minimizing $\\Sigma$-norm). Note that this is not mahalanobis distance. Because $\\||\\theta-\\theta^\\*\\||_\\Sigma:= \\||\\Sigma^{1/2}(\\theta-\\theta^\\*)\\||$. And Mahalanobis distance is $\\||\\Sigma^{-1/2}(\\theta-\\theta^\\*)\\||$. I would say $\\Sigma$-norm is more \"standard\" for linear regression. For the euclidean norm, I think gradient descent would also give you the optimal solution, $\\||\\theta-\\theta^\\*\\||=\\sigma\\sqrt{d/n}$. For the private setting, I guess if you use regression depth together with exponential mechanism, you can get $\\||\\theta-\\theta^\\*\\||\\lesssim \\frac{d\\sigma}{\\varepsilon n}$. This does not require any bound on the eigenspectrum of $\\Sigma$. 
But of course, this is inefficient.\n\n\n[RWY09] Minimax rates of estimation for high-dimensional linear regression over $\\ell\\_q $-balls by Raskutti, Garvesh and Wainwright, Martin J and Yu, Bin\n\n[Dep20] A spectral algorithm for robust regression with subgaussian rates by Jules Depersin\n",
" I thank the authors for the time spent on clearly answering all questions. \n\nI understand better now the improvement on this paper vs. clipping and it does seems like a bad range R is less harmful than a bad clipping rate. \n\nIt seems that several claims could be confirmed with either theoretical results with assumptions that are less restrictive, like data being Gaussian. Further, given the fact that there are some heuristics involved in the algorithm, claims on the success of heuristics should be supported on real or broader classes of distributions. ",
" Thank you for your helpful comments. We will definitely add detailed discussion about this quantitative comparison to the paper. We wish to emphasize that our techniques are quite different from the prior work, which we think is of independent interest, even if the asymptotic guarantees are suboptimal.\n\nA few remarks:\n\nThe non-private bound $\\\\|\\hat{\\theta}-\\theta^{\\*}\\\\|\\_{\\\\Sigma} \\\\lesssim \\\\sigma \\\\sqrt{\\\\frac{d}{n}}$ still depends on the data distribution in two ways: First, it is in terms of the Mahalanobis norm $\\\\|\\hat\\theta-\\theta^\\*\\\\|\\_\\Sigma = \\sqrt{(\\hat\\theta-\\theta^\\*)^T\\Sigma^{-1}(\\hat\\theta-\\theta^\\*)}$, which depends on the data distribution. To convert this bound into the standard Euclidean norm we must bound the eigenspectrum of $\\Sigma$.\nSecond, the $\\sigma$ on the right hand side also depends on the data distribution. \n\nRegarding the comparison to VJT22: That is a purely theoretical paper and there is no implementation available to compare against. In any case, that work appeared *after* the NeurIPS deadline. The authors shared a draft with us beforehand, but we did not have time to implement their algorithm, which would require significant optimization to make it practical.\n",
" I have read the other reviewers' comments and the authors' rebutal. Thanks to the authors for their feedback. Most of my concerns are addressed. Regarding Reviewer bc18's comments, I would like to provide more details on distributional assumptions and clarification on the theoretical guarantees. \n\n > Our algorithm does not require knowing the data distribution. We only analyzed isotropic Gaussian data, but the algorithm would work even for non-isotropic data. Of course, the performance might degrade. Note that this issue arises even in the non-private setting, as the covariance matrix could become ill-conditioned.\n\n 1. In the non-private setting, for sub-gaussian data without knowing any information about covariance, both gradient descent and OLS solution would give you the optimal error, which is $\\||\\hat{\\theta}-\\theta^\\*\\||_\\Sigma\\simeq \\sigma \\sqrt{\\frac{d}{n}}$. Note that this error does not scale with condition number $\\kappa$ under $\\Sigma$-norm. In the non-private setting, even if the covariance is ill-conditioned, the performance will not necessarily degrade.\n\n 2. In the private setting, we first consider the non-private error. For sub-gaussian/sub-exponential data without knowing the covariance, both [VJT22] and [LKO21] provide algorithms that achieve optimality, $\\||\\hat{\\theta}-\\theta^\\*\\||_\\Sigma\\simeq \\sigma \\sqrt{\\frac{d}{n}}$. For the private error, under the same settings, [VJT22] gives $\\||\\hat{\\theta}-\\theta^\\*\\||_\\Sigma \\lesssim \\sigma\\frac{\\kappa d}{\\varepsilon n}$. [LKO21] gives algorithm that achieves optimality, $\\||\\hat{\\theta}-\\theta^\\*\\||_\\Sigma \\lesssim \\sigma\\frac{d}{\\varepsilon n}$. This means in the private setting, even in the ill-conditioned case, knowing the covariance or condition number is not fundamentally necessary.\n\n 3. Reviewer bc18 has concerns about the distributional assumptions. I would like to provide more details here. For norm bounded data, [CWZ19] and [CWZ20] (listed below) achieve the optimality $\\||\\hat{\\theta}-\\theta^\\*\\||_\\Sigma \\simeq \\sigma\\sqrt{\\frac{d}{n}}+\\sigma\\frac{d}{\\varepsilon n}$ for linear regression and generalized linear model. [LKO21] achieve nearly optimal theoretical guarantees under hypercontractive distributions, sub-gaussian distributions, and even heteroscedastic settings ($x_i$ and regression noise are correlated). As discussed above, Gaussian assumption or known covariance is not fundamentally needed to obtain such error bounds. I agree that Gaussian is natural and standard and the assumption in this paper could be potentially relaxed. But theoretically, the proposed algorithm gives $\\||\\hat{\\theta}-\\theta^\\*\\||_\\infty \\lesssim \\sigma\\frac{d}{\\varepsilon n}$ for isotropic Gaussians, which is strictly weaker than both [LKO21] and [VJT22] under the same settings. Such theoretical comparisons with prior works are missed in the current version.\n\nAs an empirical paper, I agree that the proposed algorithm is more practical than [LKO21]. But in my opinion, whether it is more practical than [VJT22] is still debatable without further experiments.\n\n\n\n\nReferences: \n\n[CWZ19'] The Cost of Privacy: Optimal Rates of Convergence for Parameter Estimation with Differential Privacy by T. Tony Cai, Yichen Wang, Linjun Zhang.\n\n[CWZ20'] The Cost of Privacy in Generalized Linear Models: Algorithms and Minimax Lower Bounds by T. Tony Cai, Yichen Wang, Linjun Zhang.\n\n\n",
" We thank the reviewer for their feedback and suggestions. We respond to their comments:\n\n - *“Compared to prior works, for example [VJT22], this algorithm is not optimal.” “ In the experiments, this paper does not compare the algorithm with [VJT22], which shouldn't be too hard to implement.”*\n\nThis is correct. Our result for Gaussian data is not asymptotically optimal. We did not compare to VJT22 because we were not aware of it until late in the process (it only appeared publicly in mid July). There is no implementation available. In principle, we could implement it, but there are a lot of hyperparameters/design choices in that algorithm which would complicate any experimental evaluation.\n\n\n - *“This paper requires prior knowledge of the covariance of the Gaussian.\"*\n\nOur algorithm does not require knowing the data distribution. We only analyzed isotropic Gaussian data, but the algorithm would work even for non-isotropic data. Of course, the performance might degrade. Note that this issue arises even in the non-private setting, as the covariance matrix could become ill-conditioned.\n\n\n - *“Similar ideas have been used for mean estimation with sub-gaussian rates [1], DP mean estimation [2], linear regression[3]. It seems that, as a byproduct of the median of mean scheme, these types of algorithms could also provide provable robustness guarantees against corruption of the data. This is not discussed in this paper. Also, it would be better if the authors could add some related works in these topics.”*\n\nThank you for the suggestion, we will add some discussion of this connection and these related works. Our approach is inspired by robust statistics and we should further emphasize the connection.\n\n\n - *“Why is the size of each partition chosen as d?”*\n\nIn our experiments this gave the best performance. We tried larger size (e.g., 2d). Interestingly, we can prove better theoretical results for larger size partitions (2d).\n\nThere is a tradeoff here – larger partitions mean each estimate is more accurate, but there are fewer partitions for the private median algorithm to work with. Empirically the error of the private median algorithm dominates so having more partitions is a win. Asymptotically doubling the partition size to 2d is only a constant factor loss (which is hidden by the big O notation), but we can improve the per-partition accuracy by a poly(d) factor. So the asymptotic results point to a different setting of parameters vis a vis the empirical results.\n\n\n - *“Theorem 1 runs algorithm 1.1 with $\\ell=1$. How does different choices of $\\ell$ affect the utility guarantees?”*\n\nTheoretically we can analyze larger $\\ell$; it makes the proof slightly more complicated. It gives the same final guarantee, so there is no win here. (The number of samples we feed to the exponential mechanism increases by a factor of $\\ell$, but this is balanced out by the sensitivity increasing by the same factor. The non-private error bound also doesn’t improve because although we have more samples they are not independent.) Empirically larger $\\ell$ yields slightly better results.\n",
" We thank the reviewer for their time and comments. We respond to the main points:\n\n - *“Missing relevant recent work on DP-medians/quantiles.”*\n\nAre you referring to Kaplan, Schnapp, & Stemmer (ICML 2022)? We will add this reference. In our application we are only looking for the median, rather than multiple quantiles, so it doesn’t seem like this paper would improve over the exponential mechanism. But multiple quantiles would be relevant if, e.g., we want to output confidence intervals for the coefficients.\n\n - *“Experiments only test on synthetic data that meets the assumption. However, finding a reasonable r (even in small settings) could be hard.” “How to compute range R in real applications? if R is set too large, then the exponential mechanism will add too much mass to small depth points, and if it is too small, the range may not even contain the real model.”*\n\nThe advantage of our method is that it is very insensitive to the parameter $r$/$\\mathcal{R}$. Asymptotically the dependence is only logarithmic. In practice, extending $\\mathcal{R}$ has negligible impact because the probability of outputting a point far outside the true range is exponentially small.\n\n - *“How does the restriction in line 80 compares to clipping?”*\n\nThe paper cited on line 80 shows that the number of samples must grow with the iterated logarithm of our parameter $r$. This is an *extremely* slow-growing function and practically constant. But it shows that we cannot ignore this parameter entirely.\n\nWe use the exponential mechanism because it is practical. But it is asymptotically suboptimal. There are algorithms whose asymptotic sample complexity is polynomial in the iterated logarithm of $|\\mathcal{R}|$, but these algorithms are far from practical.\n\nIn contrast, clipping introduces a harsh privacy-utility tradeoff. If the clipping bounds are loose by a factor of 2, then we introduce twice as much noise as necessary. That is, the error grows linearly with the clipping, but our algorithm only has a logarithmic dependence on the range. On the other hand, if the clipping bound is too tight, this distorts the data; the effects of this on the final output are not fully understood.\n\n - *“Why is $\\sigma$ described as a parameter? I don’t see it used for any pre-processing or actual algorithm step.”*\n\nIndeed, $\\sigma$ is not a parameter of the algorithm. If we described it as a parameter, that should be corrected.\n\n - *“Can you discuss the choice of hyperparameters for DP-GD?”*\n\nOur version of DP-GD regressor has three hyperparameters: clipping norm, number of epochs, and learning rate. For the clipping norm we used the lipshitz constant of the gradient (it is known since the feature distribution is bounded) and made some attempt to manually find the best number of epochs and learning rate. In the plots we use learning rate equal to 0.1 and the number of epochs equal to 100.\n\n - *“Can you discuss the results in figure 5? why does the Widened Exponential from [AMSSV22] degrade in lower dimensions?”*\n\nAccording to our experiment, there is no degradation of the Widened Exponential in lower dimensions. Rather when the number of samples is not sufficient both algorithms are performing bad and for higher dimensions we need more samples, so the cases where the regular exponential mechanism start performing better is not shown on the figure. \n\n - *“The paper motivates well the use of a \"high dimensional private median\" for DP linear regression. 
However, it is not clear how this will overcome the challenges that current methods face, like clipping, since setting a range R is basically finding a clipping rate.”*\n\nThe reviewer is correct that our algorithm does not entirely escape the need to bound the range of the data; indeed, it is impossible to completely avoid bounding the range of the data. However, our method improves the dependence on this parameter from linear to logarithmic, which we consider to be a significant improvement both in theory and in practice.\n\n - *“Step 15 for calculating the median could result in a point outside the closure of datapoints.”*\n\nIndeed, taking a coordinate-wise median is not ideal. Ideally, we would compute something like a Tukey median, but this is difficult to compute non-privately, let alone with DP. Our design choice here is motivated by practicality. The exponential mechanism is also not asymptotically optimal, but it seems to be the most practical method for computing medians.\n\n - *“Experimental section is based on synthetic data that follows the assumptions on theorem and thus provides guidance on how to select the parameters, however, realistic datasets may behave very differently.”*\n\nWe are working to add results with real datasets. Realistic datasets tend to be heavier-tailed than synthetic datasets, which is actually an advantage of our approach compared to clipping.",
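Since the exponential mechanism for private medians is central to the discussion above, here is a minimal, hedged sketch of a textbook exponential-mechanism median over a bounded candidate range. The grid discretization, the rank-based utility, and the function name are generic illustrative choices and are not claimed to match the score function or construction used in the paper.

```python
import numpy as np

def dp_median_exp_mech(x, eps, lo, hi, grid_size=1000, rng=None):
    """Textbook eps-DP median via the exponential mechanism on a bounded
    range [lo, hi]. Utility u(c) = -|#{x_i <= c} - n/2| has sensitivity 1.
    Illustrative sketch only; not the paper's exact construction."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    candidates = np.linspace(lo, hi, grid_size)
    counts = (x[None, :] <= candidates[:, None]).sum(axis=1)
    utility = -np.abs(counts - len(x) / 2.0)
    scores = eps * utility / 2.0            # exp(eps * u / (2 * Delta_u)), Delta_u = 1
    probs = np.exp(scores - scores.max())
    return rng.choice(candidates, p=probs / probs.sum())
```

Note how the mild dependence on the range discussed above shows up here: enlarging [lo, hi] only adds candidates with very low utility, whose selection probability is exponentially small.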
" We thank the reviewer for their time and feedback. We respond to the main points:\n - *“The main (and only) theoretical result in the paper provides utility guarantees for the proposed algorithm only when the features and noise are Gaussian. This is a strong requirement on the data, especially given that previous algorithms don’t need this assumption as well.”*\n\nGaussian data is a very natural and standard assumption. The assumption could be relaxed – e.g., we could instead assume bounds on the norms of the design matrices and error vectors – but this would make the result harder to interpret without really providing additional insight.\n\nWhat kind of result would the reviewer like to see? If the reviewer has a particular data model in mind, we can try to analyze it.\n\nWe emphasize that some kind of assumption on the data is necessary to give these kinds of bounds, even non-privately. So it is not true that previous algorithms don’t have any assumptions, although it could take a different form such as bounding the condition number of the covariance matrix; but such a bound is best justified/explained via distributional assumptions.\n\n- *“Experiments: the experimental results in the paper don’t provide a convincing argument for their algorithms. First, all of the experiments are done over synthetic data.“*\n\nWe are working to add results with standard “real” datasets. If the reviewer has any suggestions for particular datasets that we should consider, those would be appreciated.\n\nWe used synthetic data because it provides an apples-to-apples comparison between methods, as we can ensure that the data is indeed bounded (i.e. no clipping required). In particular, the methods we compare to require a priori bounds on the data, and computing such bounds is a challenge in practice.\n\n\n - *“Moreover, the authors only consider low-dimensional datasets where d<30 and therefore it is not clear if the same improvements hold for high-dimensional problems.”*\n\nWe will extend the plots to higher dimensions. Is there a particular value of d that would be of interest?\n\n\n - *“Finally, it is not clear whether the authors used any hyper-parameter tuning for DP-GD (or DP-SGD); this could result in significantly better results for DP-GD. “*\n\nWe made some attempt to manually optimize the DP-GD parameters, but the reviewer is correct that there may be further room for improvement via an exhaustive hyperparameter search. We note that setting hyperparameters is a significant challenge in practice, which is a limitation of DP-GD.\n\nWe also remark that we used DP-GD instead of DP-SGD so that we can obtain tight privacy bounds via Gaussian DP, whereas DP-SGD requires subsampling which yields a suboptimal privacy-utility tradeoff. The downside of DP-GD is that it is quite slow, which was a major bottleneck in our experiments. \n\n\n - *“I encourage the authors to improve the writing in this paper. For example, the introduction could use more work on setting up the problem, stating the main results and comparing to previous work, before moving on to present the algorithm (which is done too soon in the current version).”*\n\nWe will work to improve our manuscript, in particular by adding further discussion of prior work. We are surprised by the reviewer’s comment that the algorithm is presented too soon; linear regression is such a well-known problem that we feel it needs little introduction. 
If there is anything that is unclear, we would appreciate this being pointed out.\n\n\n - *“First paragraph in page 4 has m. What is m? Should that be n?”*\n\nWe are not sure which paragraph the reviewer is referring to, but $m = \\lfloor n / d \\rfloor$ is the number of partitions (defined in Algorithm 1) and $n$ is the total number of samples.\n",
" The paper studies the problem of differentially private linear regression and develops a new algorithm for this problem based on privately calculating 1-dimensional medians. The algorithm is based on the Theil-Sen estimator and uses the exponential mechanism to privately estimate the medians required for this estimator. The authors present a utility guarantee for their algorithm with Gaussian features and Gaussian noise. Moreover, they provide some experiments over synthetic datasets that compare their algorithm to existing algorithms. The paper studies an important problem (private linear regression). However, I think the authors need to present more theoretical results, comparison to prior work, and experimental evidence to make this paper more compelling.\n\nWeaknesses\n1.\tThe main (and only) theoretical result in the paper provides utility guarantees for the proposed algorithm only when the features and noise are Gaussian. This is a strong requirement on the data, especially given that previous algorithms don’t need this assumption as well. Moreover, the authors should compare the rates achieved by their procedure to existing rates in the literature.\n2.\tExperiments: the experimental results in the paper don’t provide a convincing argument for their algorithms. First, all of the experiments are done over synthetic data. Moreover, the authors only consider low-dimensional datasets where d<30 and therefore it is not clear if the same improvements hold for high-dimensional problems. Finally, it is not clear whether the authors used any hyper-parameter tuning for DP-GD (or DP-SGD); this could result in significantly better results for DP-GD.\n3.\tWriting: I encourage the authors to improve the writing in this paper. For example, the introduction could use more work on setting up the problem, stating the main results and comparing to previous work, before moving on to present the algorithm (which is done too soon in the current version). \n\n\nMore:\n\n1.\tTypo (first sentence): “is a standard”\n2.\tFirst paragraph in page 4 has m. What is m? Should that be n?\n\n No Yes",
" This paper addresses the problem of differentially private linear regression. Specifically, current DP methods need access to the sensitivity of the query which is in practice hard to compute. Alternative methods clip the data, harming performance. This paper proposes a method that relies on privately computing a multidimensional “median”, a good estimator for the mean under certain assumptions and that has smaller sensitivity. *Strengths*\n- Nice analysis and intuition for Gaussian data.\n- Very clear and organized. \n- Good related work and comparison section.\n\n\n*Weaknesses*: \n- Missing relevant recent work on DP-medians/quantiles. \n- Experiments only test on synthetic data that meets the assumption. However, finding a reasonable r (even in small settings) could be hard. \n- Data complexity could be rather large, and prohibitive in real settings where linear regression datasets are rather small.\n- The high dimensional median proposed in step 15 may not even be an interior point in the convex closure of the data points. \n - How to compute range R in real applications? if R is set too large, then the exponential mechanism will add too much mass to small depth points, and if it is too small, the range may not even contain the real model. \n- How does the restriction in line 80 compares to clipping?\n- Why use the max instead of the min in the score function? Wouldn't this assign very high utility to a point in the boundary?\n- Why is $\\sigma$ described as a parameter? I don’t see it used for any pre-processing or actual algorithm step. \n- Can you discuss the choice of hyperparameters for DP-GD?\n- Can you discuss the results in figure 5? why does the Widened Exponential from [AMSSV22] degrade in lower dimensions?\n - The paper motivates well the use of a \"high dimensional private median\" for DP linear regression. However, it is not clear how this will overcome the challenges that current methods face, like clipping, since setting a range R is basically finding a clipping rate. \n- Step 15 for calculating the median could result in a point outside the closure of datapoints. A common example for this is having points (1,0,0), (0,1,0), and (0,0,1). Taking the median along each axis results in (0,0,0) which is outside the plane defined by the three points. \n- Experimental section is based on synthetic data that follows the assumptions on theorem and thus provides guidance on how to select the parameters, however, realistic datasets may behave very differently.\n- Experimental section only compares to one set of fixed parameters of DP-SGD. Perhaps following guidance from theory or discussing the choice of hyperparameters for DP-GD could justify this choice. \n\nIn general, the paper proposes an alternative approach but that still requires an equivalent to clipping, and this is not addressed neither theoretically nor empirically. ",
" This paper studies differentially private linear regression with known covariance for isotropic Gaussian data. The algorithm is based on the median-of-mean scheme and Theil-Sen estimator. It first splits the data into different partitions, then solves the solutions for each partition, and computes the univariate median for each coordinate of the solutions using the exponential mechanism. For approximate DP, the extra cost of privacy in terms of infinity norm is $O(d^{1.5}/\\varepsilon n)$, which has a $O(d^{0.5})$ gap to the lower bound. The primary advantage of this algorithm is that it does not need any clipping, which is more practical. Empirically, this paper provides extensive experiments and shows better performance for d=10, 20, 30 compared to a few baselines. Strengths: 1. This algorithm is simple and practical. \n2. The presentation of this paper is clear. The structure is easy to follow.\n\n\nLimitations: 1. Compared to prior works, for example [VJT22], this algorithm is not optimal. 1) Under the euclidian norm and gaussian data, the algorithm in [VJT22] has extra privacy cost $O(d/(\\varepsilon n))$. The extra cost here is $O(d^{1.5}/\\varepsilon n)$ in infinity norm. 2) This paper requires prior knowledge of the covariance of the Gaussian. 3) The loss is for infinity norm, which has $d^{0.5}$ gap to the euclidean norm. 4) In the experiments, this paper does not compare the algorithm with [VJT22], which shouldn't be too hard to implement.\n\n2. Similar ideas have been used for mean estimation with sub-gaussian rates [1], DP mean estimation [2], linear regression[3]. It seems that, as a byproduct of the median of mean scheme, these types of algorithms could also provide provable robustness guarantees against corruption of the data. This is not discussed in this paper. Also, it would be better if the authors could add some related works in these topics.\n\n\n[1] Hopkins, S. B. (2020). Mean estimation with sub-Gaussian rates in polynomial time. The Annals of Statistics, 48(2), 1193-1213. \n[2] Hopkins, S. B., Kamath, G., & Majid, M. (2022, June). Efficient mean estimation with pure differential privacy via a sum-of-squares exponential mechanism. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing (pp. 1406-1417).\n[3] Depersin, J. (2020). A spectral algorithm for robust regression with subgaussian rates. arXiv preprint arXiv:2007.06072.\n 1. Why is the size of each partition chosen as d?\n\n2. Theorem 1 runs algorithm 1.1 with $\\ell=1$. How does different choices of $\\ell$ affect the utility guarantees? This paper is theoretical and does not have a direct negative societal impact."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"9w8SiwzWUkK",
"dAHa_8uzUgs",
"Vp_toF_XdMS",
"RgrAqzP2A2l",
"vvGKmnCnA5i",
"vw8fvwkWYW",
"khIXIC4UfA3",
"yA_d2z92QIm",
"nips_2022_JSBgIaxAXk9",
"nips_2022_JSBgIaxAXk9",
"nips_2022_JSBgIaxAXk9"
] |
nips_2022_P6uZ7agiyCT | Sparse2Dense: Learning to Densify 3D Features to Boost 3D Object Detection | LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the arts. | Accept | This paper proposes to utilize point cloud completion tools to densify sparse point clouds which could subsequently improve the performance of point cloud detection methods. After rebuttal, reviewers agree on the novelty of the method and its effectiveness on the Waymo open dataset. AC recommends this paper for acceptance following the unanimous opinion. | train | [
"Sac6WpJQB_m",
"gPzDqnpQamA",
"hvybdtcz-aH",
"ngp7Dsidgn",
"aTQySFUbBZt",
"nuito-YBZ4H",
"lYsG8ZFgAwy",
"58w803SWyX",
"k2nVnSah5Rs",
"cEebHkVF-n1"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed reply. I think the paper is now above the acceptance bar and I increase my rating to weak accept.",
" Dear Reviewers: Thank you for your effort and time in reviewing our paper. We are encouraged to see positive ratings from all reviewers and your recognition of the good & novel design, interesting idea, effective & universal method, and solid validation. We will carefully revise the paper following the review comments, include the suggested references, and improve the clarity of the paper. Also, we will release our code to facilitate future research upon the publication of this work.",
" For S&W 2:\n* We think that Figure 1 presents a good example. The orange region in the figure is the distant region and each object contains a few points. Our method can still achieve an acceptable performance in this bad case.\n\nFor S&W 3:\n* To our knowledge, the upsampling only generates points near the original points and it can not recover the missing structure. Some papers try to upsample the whole frame first, but it generates a lot of insignificant points (like background points), or just complete the objects for the proposals and then refine the proposals [43], which is very time consuming. This is why we choose to densify the feature in the latent space by formulating a light-weight module.\n\nFor S&W 4:\n* Thanks for your suggestion! Actually, the Waymo Domain Adaptation dataset is much more challenging than the training dataset, which is the Waymo open dataset. It is because the data was captured in a different city in a rainy situation and point cloud is more sparse and incomplete. Yet, we will evaluate our models on some other datasets.\n\nFor S&W 5:\n* A lot of prior methods were trained and tested on KITTI, but KITTI cannot provide temporal point cloud sequences for generating the dense object. Also, most of them didn’t provide the code for training their model on waymo dataset. The methods we compared are already the recent state-of-the-art methods on 3D object detection with similar experimental sitting.\n",
" For Clarity 2:\n* Thanks for your careful review. The regress loss L_reg helps predict the height-above-ground, sub-voxel location refinement, 3D size and rotation for anchor-free-based methods following [1] and helps predict the centres and 3D size and rotation for anchor-based methods.\n\nFor Clarity 3:\n* We actually followed prior works [1, 3, 27, 28, 29] to set the grid cell size. We will include further details in the revision. The settings of grid size and detect range are based on the LiDAR sensor range and what size of BEV map that we target.\n\nFor Method 4:\n* Actually, we predict one offset for each cell, meaning that each cell contains a single point. The ground truth is the average position of the points in each cell.\n\nFor method 5:\n* Thanks for your valuable suggestion! We will try it!\n\nFor Experiments 3:\n* Here we show the full ablation results, and we will put the following table in the revision or supplementary material. \n| | Vehicle-L2 | Vehicle-L2 | Pedestrain-L2 | Pedestrain-L2 | Cyclist-L2 | Cyclist-L2 |\n|-------|:----------:|:-----:|:-------------:|:-----:|:----------:|:-----:|\n| | mAP | mAPH | mAP | mAPH | mAP | mAPH |\n| Base | 63.03 | 62.53 | 63.72 | 58.03 | 65.03 | 63.90 |\n| +dis | 63.84 | 63.32 | 67.04 | 61.21 | 67.59 | 66.44 |\n| +s2d | 65.75 | 65.22 | 67.62 | 61.65 | 68.50 | 67.34 |\n| +pcm | 66.12 | 65.58 | 67.47 | 61.59 | 68.69 | 67.54 |\n| - dis | 65.61 | 65.08 | 64.75 | 58.80 | 65.79 | 64.62 |\n\nFor Experiments 5:\n* We have evaluated the latency analysis on the whole validation set for CenterPoint baseline, the result is similar to table 8. We will update the table in the revision.\n| Detectors | CenterPoint | CenterPoint+S2D |\n|:-------------------:|:-----------:|:---------------:|\n| Inference time (ms) | 53.0 | 62.8 (+9.8) |\n\n",
" For W1:\n* Thanks for your suggestions. There are two main advantages of our method compared with [1]. First, [1] only uses five adjacent frames to generate the dense features as the guidance while our method generates a dense object from the whole data sequence. Hence, our method can better simulate the features even for objects that are further away from the sensor. Second, our method further designs the S2D and PCR to better densify the sparse features. We will add the suggested reference and discussion in the main paper.\n\nFor W2:\n* Thanks for your information, we will add this suggested reference in the main paper.\n\nFor W3:\n* Thanks for your suggestion. As we only have four RTX3090 GPUs, we need nearly a week to train one model on whole training data. Thus, we followed the strategy in OpenPCDet to use 20% training data + validation data. Note that this strategy was also adopted by many papers follow-up works [3, 27, 28, 29]. To evaluate the generalization ability, we further tested our models on the challenge dataset–Waymo Domain Adaptation dataset and the results demonstrate the generalization ability of our model over prior methods. We will be happy to also train our models on the full training data and show the results on the test set in the revised paper.\n\nFor Q2:\n* We actually used the function from Open3D, called “remove_radius_outlier”. We will make relevant clarifications in the paper.\n\nFor Q3:\n* Thanks for your careful review!\n\n",
" Thanks for your suggestions, we have visualized the feature map of S2D in Figure 1 and found that our S2D can learn the 3D dense geometry of the vehicles that are far from the camera. We also visualized the voxel mask and point offset in Figure 5 and found the PCR can really recover some 3D dense geometry information of cars, pedestrians and cyclists. We will provide more visualization in the revision. \n\nQ: Will the code be released after publication?\n- Yes, absolutely! We will release the code upon the publication of this work.\n",
" This paper aims to address object detection from LiDAR point clouds. The key idea is to train a densify network that can densify 3D features in a sparse point cloud so that they match the quality of the 3D features trained on dense point clouds. This densify network can be plugged into different object detection networks and enhance their performance - this is demonstrated in experiments.\n + The S2D network is a lightweight, plug and play network, that can enhance the performance of different object detection networks\n+ Lots of good design and engineering choices, e.g., voxelizing dense point cloud, design of S2D network, point cloud reconstruction network, etc\n+ The experiments are done thoroughly and show improvements on a number of different object detection networks\n\n- Lacks discussion of the reconstruction quality - it would help readers to understand if S2D only learns the feature in latent space, or actually starts to learn 3D dense geometry\n Will the code be released after publication?\n This is a well written paper. Good idea. Good execution and good experiments. I do not see strong limitations.\n",
" This paper proposes a knowledge-distillation approach to learning densified 3D features for outdoor 3D object detection. Concretely, the authors propose to train two networks DDNet and SDNet for 3D detection. During the training, DDNet will take densified 3D point cloud as inputs~(through multiple frame aggregation) while SDNet only takes single-frame Lidar point cloud. During the training, features computed from SDNet are matched with the corresponding features computed through DDNet at multiple different levels. This feature mimicking encourages the SDNet network to learn densified 3D features even with single frame input. The final model is evaluated on the Waymo 3D Detection and Domain adaptation dataset where the method considerably outperforms multiple popular baselines while maintaining similar latency. S1: The idea of learning densified 3D features is interesting and widely applicable to a wide range of 3D detection problems where the point cloud is sparse due to the physical constraints of the Lidar sensor. \n\nS2: The design of the feature matching process is novel. The feature mimicking or knowledge distillation is performed at multiple levels including latent features and raw point clouds~(through a point cloud completion task). \n\nS3: The method is effective and the experimental validation is solid. \n\nW1: Some important references and discussions with previous work are missing. For instance, [1] also proposed a knowledge distillation approach to learn densified features with single frame input. While it seems that the current method performs considerably better than [1], the authors need to add more detailed discussions to show the difference/improvements.\n\nW2: In section 3.4, the authors claim that they propose a novel voxel-level reconstruction scheme (occupancy + inner voxel offset regression). To my knowledge, this is already explored in [2]. Please add proper references. \n\nW3: All Waymo results are now evaluated with 20% training data + validation set. A comparison with published SOTA methods on the test set (and 100% training data) is needed to verify generalization. \n\n[1] Wang, Yue, et al. \"Multi-frame to single-frame: knowledge distillation for 3d object detection.\" arXiv preprint arXiv:2009.11859 (2020).\n\n[2] Xu, Qiangeng, et al. \"Spg: Unsupervised domain adaptation for 3d object detection via semantic point generation.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n Q1: Please address the weakness part I listed above. I will adjust my rating mainly based on the author's response to this part. \n\nQ2: In section 3.2, the authors mention that the model filters out the outlier points after aggregating the multi-frame point cloud. How is done exactly? \n\nQ3: line 162: more dense -> denser\n The authors adequately addressed the limitations and potential societal impact. ",
" The paper proposes a method to boost the performance of object detectors in 3D LiDAR scans by learning an auxiliary task to densify them (in a self-supervised way, by distilling intermediate dense features and regressing densified point cloud in a DAE fashion). It is fairly detector-agnostic and can thus be applied to different base SOTA detectors such as CentrePoint [1] or SECOND [15]. It improves their performance by 2–5 p.p. at a cost of adding 15–20% to the runtime, which is a reasonable trade-off. I think the paper proposes a simple, practical, and universal method, which is sufficiently evaluated, and the paper is well written. I vote to accept it, although I am not very confident as 3D detection is not really my area.\n\nClarity:\n* (+) the paper is generally well written and has sufficient details,\n* (−) I think the paper will benefit from explaining the background on how anchor-free and anchor-based methods work; related to that: what is L_reg, specifically? Is it regressing centres of bounding boxes, or corners, or else?\n* (=) the hyperparameters are anisotropic, like grid cell size in l. 146; the lateral bounds in l. 230 are 75.2 m which is both large and weirdly specific – is it really so? then why?\n\nMethod:\n* (+) reasonable design; quite simple – (almost) all added complexity is motivated by ablation,\n* (+) the proposed improvement is orthogonal to the design of the underlying detector, so can be universally used;\n* (+) if I understand correctly, the densified point cloud is usually obtained from scan sequences, which is probably a common setting, so the method is applicable to a wide range of practical problems,\n* (−) PCR module predicts opacities and offsets – if the point cloud in that cell approximates a surface, those offsets are ambiguous in 2 dimensions; is it useful to predict them at all? I suggest to run an ablation where offsets are dropped; \n* (=) for PCR: as a future work you can try predicting a different parametrisation like an SDF predicted with an MLP in each cell; it can be naturally supervised with point clouds;\n\nExperiments:\n* (+) comparison to SOTA seems sufficient (although I am not an expert);\n* (+) ablation study has all necessary baselines (except for dropping offsets in PCR),\n* (−) however, ablation results are only reported on 2 out of 6 categories (no pedestrians at all);\n* (+) Table 3 shows that the benefit of densification increases with the distance to the object,\n* (±) latency analysis in Section 4.6 is nice, however I am not confident 100 samples are sufficient – can you compute confidence intervals?\n\n==========================\n\nTypos / wording:\n* l. 18: missing reference,\n* l. 40: semantic points – what are these?\n* l. 76: should not be there,\n* l. 163: what is BEV?\n* ll. 207, 267: adopt ← adapt.\n\n The only substantial issue to address is missing ablations: L1 and pedestrian category, and skipping predicting the offset.\n They are addressed in conclusions; I cannot think of anything beyond it that is relevant to the proposed extension.",
" This paper presents a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. And experiments on Waymo Open Dataset show promising results. 1. By learning the feature representation of the point cloud in the dense point cloud space and then transferring it to the sparse point cloud space, this idea is relatively novel and feasible.\n\n2. Although the transition from dense to sparse is a good idea, there is still a large gap from dense to sparse. Especially when there is severe occlusion, this gap can be particularly large. And this case is a very common problem in the large practical application of point cloud target detection. So what about the performance in this bad case? \n\n3. The process of learning the feature representation of the point cloud in the dense point cloud space is actually an upsampling of the point cloud, and the method of voxelization is actually not an optimal way. Have you tried any other way? And have you compared the effects of different upsampling methods?\n\n4. The method in this paper is mainly used for experiments on the waymo dataset, so how about the performance on other datasets? Such as KITTI, nuScenes.\n\n5. The method in this paper is only compared with a few methods in the experimental results. In fact, there are many excellent works in the point cloud target detection task, but the comparison with them is missing. see part ‘Strengths And Weaknesses’. see part ‘Strengths And Weaknesses’."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
5
] | [
"aTQySFUbBZt",
"nips_2022_P6uZ7agiyCT",
"cEebHkVF-n1",
"k2nVnSah5Rs",
"58w803SWyX",
"lYsG8ZFgAwy",
"nips_2022_P6uZ7agiyCT",
"nips_2022_P6uZ7agiyCT",
"nips_2022_P6uZ7agiyCT",
"nips_2022_P6uZ7agiyCT"
] |
nips_2022_ByYFpTwgLGO | Deep Multi-Modal Structural Equations For Causal Effect Estimation With Unstructured Proxies | Estimating the effect of intervention from observational data while accounting for confounding variables is a key task in causal inference. Oftentimes, the confounders are unobserved, but we have access to large amounts of additional unstructured data (images, text) that contain valuable proxy signal about the missing confounders. This paper argues that leveraging this unstructured data can greatly improve the accuracy of causal effect estimation. Specifically, we introduce deep multi-modal structural equations, a generative model for causal effect estimation in which confounders are latent variables and unstructured data are proxy variables. This model supports multiple multimodal proxies (images, text) as well as missing data. We empirically demonstrate that our approach outperforms existing methods based on propensity scores and corrects for confounding using unstructured inputs on tasks in genomics and healthcare. Our methods can potentially support the use of large amounts of data that were previously not used in causal inference | Accept | Reviewers agreed the paper presents a novel method addressing an important problem, building on and expanding prior work in the field. Specifically, it strongly relates to the CEVAE model, adding to it the ability to deal with multiple proxies each with a different structure, as well as introducing a new inference approach. Extensive experimental evaluation (some following the reviews) shows overall strong results. There were concerns with the somewhat limited level of novelty, and lack of theoretical foundations.
NB: The definition of ITE in the paper should in fact be CATE, the Conditional Average Treatment Effect, as it is conditioned on a variable X=x, and not a single unit's effect. | train | [
"pUrtoRbeNJE",
"BH_U5BCxGV",
"6f5QEsBv5LU",
"OLCHt440rpm",
"MuWTRETeX9s",
"ooU9jAc0iOp",
"iHEIrlMOhOx",
"ooFjL4M_tki",
"Teu1hbtQgLi",
"pNdRmuEXwh0",
"_BvvogV-EC-",
"JNHchPwGhH",
"M2obpj9-UwF",
"SLcdPwRjJaW"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Experiments with Modified Causal Graph Structures**\n\nIn our previous response, we argued that our core method handles modified causal graph structures with possibly small modifications. Here, we perform a simulation study to confirm these results empirically. In our experiments, we find that DMSE __recovers the ATE with similar accuracy on the modified and unmodified graph structures__ and its __error tends to zero with increasing dataset sizes__.\n\nBelow, we use an extension of the synthetic dataset used in our paper: see **Appendix G.5** for the full data generating process. Table 1 below demonstrates that DMSE recovers ATE under modified causal graph structures. We demonstrate that with increasing dataset size, the ATE error on test dataset continues to fall. Thus, with dataset size approaching infinity, we can recover the true ATE as long as the model class contains true distribution and our optimizer can find the minimum.\n\n**Table 1: DMSE under alternative graph structures**\n| Causal Graph | Training Dataset Size | ATE error (train+val) |ATE error (test)\n|--- |--- |--- |---|\n|Dataset A: Original causal graph|100 | 0.1966 (0.0389) | 0.3590 (0.0564)\n|| 1000 | 0.0575 (0.0103) | 0.1155 (0.0220)\n|| 10000 | 0.0335 (0.0127) | 0.0303 (0.0140)\n|| 25000 | 0.0274 (0.0055) | 0.0292 (0.0058)\nDataset B: Some inputs are observed confounders|100 | 0.1280 (0.0184) | 0.2770 (0.0401)\n|| 1000 | 0.0320 (0.0099) | 0.0951 (0.0264)\n|| 10000 | 0.0152 (0.0038) | 0.0237 (0.0049)\n|| 25000 | 0.01951 (0.0058) | 0.0204 (0.0063)\n|Dataset C: Some inputs are observed confounders|100 | 0.1354 (0.0361) | 0.1970 (0.0561)\n|| 1000 |0.0596 (0.0191) | 0.1060 (0.0214)\n|| 10000 | 0.0315 (0.0055) | 0.0328 (0.0113)\n|| 25000 | 0.0140 (0.0039) | 0.0226 (0.0072)\n|Dataset D: Some input proxies are not conditionally independent|100 | 0.1391 (2.09E-02) | 0.2300 (5.44E-02)\n|| 1000 |0.0596 (0.0103) | 0.1012 (0.0223)\n|| 10000 | 0.0096 (0.0022) | 0.029 (0.0058)\n|| 25000 | 0.0139 (0.0033) | 0.0206 (0.0045)\n\nIn Table 2, DMSE also compares favorably with CEVAE and recovers the ATE in this extended setting. In this extended setting, we have a combination of structured and unstructured input modalities. CEVAE takes a concatenation of these modalities as its input while DMSE has a separate model and inference network for each modality. Hence DMSE can handle diverse modality types gracefully as compared to CEVAE.\n\nThus, in response to questions from reviewers **5MY4** and **zkCX**, we empirically demonstrate that our methods can work with modified causal graph structures (all input covariates do not need to be conditionally independent unstructured proxies). This supports the discussion we have newly added in **Appendix G** during the rebuttal period. \n\n**Table 2: Comparison of CEVAE and DMSE under alternative graph structures**\n| Causal Graph | CEVAE | | DMSE | |\n| --- | --- | --- | --- | --- |\n| | ATE error (train+val) |ATE error (test) | ATE error (train+val) | ATE error (test)|\n| Dataset A: Original causal graph |0.0636 (0.0244) | 0.0752 (0.0276) | 0.0335 (0.0127) | 0.0303 (0.0140) |\n| Dataset B: Some inputs are observed confounders | 0.0522 (0.0134) | 0.0498 (0.0148) | 0.0152 (0.0038) | 0.0237 (0.0049) |\n| Dataset C: Some inputs are observed confounders | 0.0591 (0.0145) | 0.0671 (0.0180) |0.0315 (0.0055) | 0.0328 (0.0113) |\n| Dataset D: Some input proxies are not conditionally independent | 0.0375 (0.0141) | 0.0539 (0.0075) | 0.0096 (0.0022) | 0.029 (0.0058) |\n",
" Below, we are adding additional experiments and simulations to support our claims. We report __additional experiments__ on two settings: modified causal graph structures and a head-to-head comparison of DMSE vs. CEVAE.\n\n**Comparing DMSE with CEVAE**\n\nIn our previous response, we identified key differences between DMSE and CEVAE: (1) a modified causal graph architecture that supports multiple unstructured and possibly missing proxies; (2) improved inference algorithms. We now show that these differences lead to differences in performance in practice by performing a simulated experiment. In all settings, we **outperform the popular CEVAE model**.\n\nFirst, we generate synthetic data using a process similar to the synthetic experiment already present in our paper (see Appendix G.5 for details). There is only one modality in this experiment; the key difference between the two models in the inference procedure. Table 1 shows that even in this setting, DMSE recovers the ATE more accurately than CEVAE.\n\n**Table 1: Comparison of DMSE with CEVAE on one modality**\n| Model | ATE error (train+val) | ATE error (test) |\n| --- | --- | --- |\n| CEVAE | 6.37E-02 (1.62E-02) | 6.41E-02 (1.78E-02) |\n| DMSE | 3.28E-02 (3.96E-03) | 3.33E-02 (4.49E-03) |\n\nNext, we compare CEVAE vs. DMSE when the data has **many unstructured modalities** in Table 2. We generate synthetic data from K modalities. The details on this are in **(Appendix G.5) (Dataset E)**. The CEVAE model treats them as one concatenated vector; DMSE models them as separate vectors. As expected, DMSE handles large numbers of modalities better than CEVAE.\n\n**Table 2: Comparison of DMSE with CEVAE under increasing number of input proxies**\n| Number of input modalities | CEVAE | | DMSE | | % improvement made by DMSE w.r.t CEVAE | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | ATE error (train+val) | ATE error (test) | ATE error (train+val) | ATE error (test) | ATE error (train+val) | ATE error (test) |\n| 5 | 0.0533 (0.0165) | 0.0663 (0.0244) | 0.0421 (0.0045) | 0.0472 (0.0166) | 21.0% | 28.8% |\n| 10 | 0.0381 (0.0122) | 0.0425 (0.0148) | 0.0296 (0.0040) | 0.0334 (0.0052) | 22.5% | 21.5% |\n| 15 | 0.0465 (0.0062) | 0.0545 (0.0112) | 0.0350 (0.0066) | 0.0408 (0.0011) | 24.7% | 25.1% |\n| 20 | 0.0764 (0.0178) | 0.0738 (0.0164) | 0.0407 (0.0087) | 0.0383 (0.0054) | 46.6% | 48.1% |\n\n\n",
" **Comparing of DMSE with CEVAE**\n\nIn our previous response, we identified key differences between DMSE and CEVAE: (1) a modified causal graph architecture that supports multiple unstructured and possibly missing proxies; (2) improved inference algorithms. We now show that these differences lead to differences in performance in practice by performing a simulated experiment. In all settings, we **outperform the popular CEVAE model**.\n\nFirst, we generate synthetic data using a process similar to the synthetic experiment already present in our paper (see Appendix G.5 for details). There is only one modality in this experiment; the key difference between the two models in the inference procedure. Table 1 shows that even in this setting, DMSE recovers the ATE more accurately than CEVAE.\n\n**Table 1: Comparison of DMSE with CEVAE on one modality**\n| Model | ATE error (train+val) | ATE error (test) |\n| --- | --- | --- |\n| CEVAE | 6.37E-02 (1.62E-02) | 6.41E-02 (1.78E-02) |\n| DMSE | 3.28E-02 (3.96E-03) | 3.33E-02 (4.49E-03) |\n\nNext, we compare CEVAE vs. DMSE when the data has **many unstructured modalities** in Table 2. We generate synthetic data from K modalities. The details on this are in **(Appendix G.5) (Dataset E)**. The CEVAE model treats them as one concatenated vector; DMSE models them as separate vectors. As expected, DMSE handles large numbers of modalities better than CEVAE.\n\n**Table 2: Comparison of DMSE with CEVAE under increasing number of input proxies**\n| Number of input modalities | CEVAE | | DMSE | | % improvement made by DMSE w.r.t CEVAE | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | ATE error (train+val) | ATE error (test) | ATE error (train+val) | ATE error (test) | ATE error (train+val) | ATE error (test) |\n| 5 | 0.0533 (0.0165) | 0.0663 (0.0244) | 0.0421 (0.0045) | 0.0472 (0.0166) | 21.0% | 28.8% |\n| 10 | 0.0381 (0.0122) | 0.0425 (0.0148) | 0.0296 (0.0040) | 0.0334 (0.0052) | 22.5% | 21.5% |\n| 15 | 0.0465 (0.0062) | 0.0545 (0.0112) | 0.0350 (0.0066) | 0.0408 (0.0011) | 24.7% | 25.1% |\n| 20 | 0.0764 (0.0178) | 0.0738 (0.0164) | 0.0407 (0.0087) | 0.0383 (0.0054) | 46.6% | 48.1% |\n\n\n",
" We thank the reviewer for their response. The reviewer would like to see additional experiments and simulations to support our claims. Below, we report __additional experiments__ on two settings: modified causal graph structures and a head-to-head comparison of DMSE vs. CEVAE.\n\n**Experiments with Modified Causal Graph Structures**\n\nIn our previous response, we argued that our core method handles modified causal graph structures with possibly small modifications. Here, we perform a simulation study to confirm these results empirically. In our experiments, we find that DMSE __recovers the ATE with similar accuracy on the modified and unmodified graph structures__ and its __error tends to zero with increasing dataset sizes__.\n\nBelow, we use an extension of the synthetic dataset used in our paper: see **Appendix G.5** for the full data generating process. Table 1 below demonstrates that DMSE recovers ATE under modified causal graph structures. We demonstrate that with increasing dataset size, the ATE error on test dataset continues to fall. Thus, with dataset size approaching infinity, we can recover the true ATE as long as the model class contains true distribution and our optimizer can find the minimum.\n\n**Table 1: DMSE under alternative graph structures**\n| Causal Graph | Training Dataset Size | ATE error (train+val) |ATE error (test)\n|--- |--- |--- |---|\n|Dataset A: Original causal graph|100 | 0.1966 (0.0389) | 0.3590 (0.0564)\n|| 1000 | 0.0575 (0.0103) | 0.1155 (0.0220)\n|| 10000 | 0.0335 (0.0127) | 0.0303 (0.0140)\n|| 25000 | 0.0274 (0.0055) | 0.0292 (0.0058)\nDataset B: Some inputs are observed confounders|100 | 0.1280 (0.0184) | 0.2770 (0.0401)\n|| 1000 | 0.0320 (0.0099) | 0.0951 (0.0264)\n|| 10000 | 0.0152 (0.0038) | 0.0237 (0.0049)\n|| 25000 | 0.01951 (0.0058) | 0.0204 (0.0063)\n|Dataset C: Some inputs are observed confounders|100 | 0.1354 (0.0361) | 0.1970 (0.0561)\n|| 1000 |0.0596 (0.0191) | 0.1060 (0.0214)\n|| 10000 | 0.0315 (0.0055) | 0.0328 (0.0113)\n|| 25000 | 0.0140 (0.0039) | 0.0226 (0.0072)\n|Dataset D: Some input proxies are not conditionally independent|100 | 0.1391 (2.09E-02) | 0.2300 (5.44E-02)\n|| 1000 |0.0596 (0.0103) | 0.1012 (0.0223)\n|| 10000 | 0.0096 (0.0022) | 0.029 (0.0058)\n|| 25000 | 0.0139 (0.0033) | 0.0206 (0.0045)\n\nIn Table 2, DMSE also compares favorably with CEVAE and recovers the ATE in this extended setting. In this extended setting, we have a combination of structured and unstructured input modalities. CEVAE takes a concatenation of these modalities as its input while DMSE has a separate model and inference network for each modality. Hence DMSE can handle diverse modality types gracefully as compared to CEVAE.\n\nThus, in response to questions from reviewers **5MY4** and **zkCX**, we empirically demonstrate that our methods can work with modified causal graph structures (all input covariates do not need to be conditionally independent unstructured proxies). This supports the discussion we have newly added in **Appendix G** during the rebuttal period. \n\n**Table 2: Comparison of CEVAE and DMSE under alternative graph structures**\n| Causal Graph | CEVAE | | DMSE | |\n| --- | --- | --- | --- | --- |\n| | ATE error (train+val) |ATE error (test) | ATE error (train+val) | ATE error (test)|\n| Dataset A: Original causal graph |0.0636 (0.0244) | 0.0752 (0.0276) | 0.0335 (0.0127) | 0.0303 (0.0140) |\n| Dataset B: Some inputs are observed confounders | 0.0522 (0.0134) | 0.0498 (0.0148) | 0.0152 (0.0038) | 0.0237 (0.0049) |\n| Dataset C: Some inputs are observed confounders | 0.0591 (0.0145) | 0.0671 (0.0180) |0.0315 (0.0055) | 0.0328 (0.0113) |\n| Dataset D: Some input proxies are not conditionally independent | 0.0375 (0.0141) | 0.0539 (0.0075) | 0.0096 (0.0022) | 0.029 (0.0058) |\n",
" I sincerely thank the authors for their significant effort. I appreciate the extensions and derivations of the current model in Appendix G.\nI thank the authors for discussing the possibility of other causal structures and how to identify the inconsistency of the model using the hold-out dataset. I would like to see a discussion or a simulation in the updated version. I maintain my current score.",
" The authors have addressed my concerns in their rebuttal. \nAs a result, I have increased my score accordingly.",
" **Discussion of Reviewer Questions About Model Structure**\n\n*Why is only the causal structure shown in Figure 1 discussed?*\n \nWe first note that Figure 1 describes a sensible structure for our setting: unstructured data are most likely proxies and are unlikely to influence other variables (e.g., pixel values are unlikely to have a direct causal effect on either X, Y, or T). The proxies can also be structured in our framework (e.g., in our IHDP experiments).\n\nHowever, we acknowledge that other structures are possible. As discussed above, we have introduced extensions to when there are dependencies among the proxies, and some covariates are fully observed (**Appendix G**). These are simple extensions of our framework in Figure 1.\n\n*What are the relevant issues, such as whether it is possible to model it using a generative model?*\n \nAssuming the causal structure holds, we may fit a generative model in order to recover the data distribution. There are several considerations:\n\n- *Does the causal structure yield identifiable causal effects?* We prove that our structure does (Theorem 1).\n\n- *Are we able to fit the model to the data distribution?* We take an approach of fitting an expressive neural estimator that can approximate the data distribution well.\n\n- *Can we obtain an estimate of ITE from the model once we learn it?* We derive an expression for the ITE as a function of the estimated model and provide approximate variational inference algorithms (Section 4) for calculating that expression.\n\nSome previous works (e.g., Miao et al.) fit a generative model with a simple (e.g., linear) parameterization. This yields guarantees on being able to recover the true data distribution if it also has that simple parameterization. However, in practice, the latter assumption is rarely true. An alternative approach is to use a very flexible neural estimator. It is not guaranteed to recover the true distribution in theory, but in practice, it tends to fit the data distribution better than simpler models.\n\n*Is it able to estimate the ATE from it? What might go wrong if the underlying model is inconsistent?*\nYes, we can estimate the ATE from the model using the procedure applied above. However, as we just mentioned this assumes that (1) our model class includes the true data distribution; (2) we have enough data to learn the model in our class; (3) our learning algorithm (objective function optimizer) can identify the true model (assuming we had infinite data); (4) we can compute ITE estimates from the model. Failure modes (1) and (2) hold for any generative models; (3) and (4) are more specific to deep generative models. In practice, we can attempt to verify if failure modes (1), (2), (3) occurred by evaluating the generative model on a hold-out dataset (and we identify these failures given enough data). We can also try to mitigate failure (4) by using a more sophisticated inference algorithm (e.g., MCMC), but that remains a direction for future work.\n\n**Discussion of Paper by Miao et al.**\n\nOur work is of a similar flavor to that of Miao et al., with two key differences: Miao et al. study the setting of simple (non-neural) dependencies among variables. Our work can be seen as extending their model to neural parameterizations. Since we use neural models, we cannot provide rigorous guarantees (e.g., we don’t know if the neural network will converge); however, our neural models can incorporate unstructured data, leading to improved performance.\n\n\n\n\n**STAR Experiment Table**: \nThank you for pointing out that having this table in the main text might be more useful. We moved the Table 6 on STAR experiments to the appendix due to space limitations since the IHDP experimental table demonstrated similar results. We can move this table to the main paper so that it is easier to connect the experimental results together. \n\n\n\n\nCitations: \n\n[1] Pearl, J. Causality. Cambridge University Press, 2009\n",
" We thank the reviewer for detailed comments and feedback on our paper. \n\n**On Causal Graph Assumptions**:\n\nThe reviewer’s statement can also be interpreted more broadly as a concern that all $X_i$ are conditionally independent proxy variables. To address this potential concern, **we derive new extensions to our methods** to the following settings:\n\n- In addition to proxy variables $X_i$, we also observe a covariate $V$ that represents observed confounders (i.e., we always have it in the data, and it influences $Y$, $T$).\n- The proxy variables may have mutual dependencies, i.e. **$X_i$** and **$X_j$** may be connected by edges in the causal graph.\n\nWe formally define these extensions in a new **Appendix G**, and we also derive the following results:\n\n- The true causal effect of **$T$** is identifiable in the more general setting presented above (**Theorem 2**).\n- We propose a new model family, **DMSE-V**, which extends **DMSE** to the above setting via simple modifications (e.g., the introduction of a new variable).\n- We derive learning and inference algorithms for **DMSE-V**. The result of these algorithms is an estimator for the ATE and ITE that holds in the extended setting.\n - The learning and inference algorithms are minor modifications to our existing ones. In particular, they involve adding the extra variable $V$ as input to the distributions $P(Y | T, Z, V), P(T | Z, V), Q(Z | Y, T, V, X)$, and the learning algorithms remain the same.\n\nFor full details, please see **Appendix G**. \n\n**On comparing our methods with CEVAE/other VAE based estimators**:\n\nThe reviewer is concerned about the novelty of our method relative to previous estimators of ATE/ITE based on deep structural equations and generative models (e.g., CEVAE).\n\nWhile our method is an instance of generative models, we identify the following key differences:\n\n- We propose **new generative model architectures** that extend existing models (e.g., DSE, CEVAE) to multiple proxies $X_i$, each possibly coming from a different modality.\n- We derive **novel inference algorithms** for these extended models, which have the following benefits:\n - Our algorithms scale better to large sets of modalities by leveraging the independence structure of the $X_i$. \n - Our inference algorithms naturally handle missing $X_i$.\n - They are also simpler: they don’t require auxiliary networks (e.g., like in CEVAE).\n- Lastly, our key contribution is that we demonstrate the effectiveness of generative models at **modeling unstructured proxies** (many previous methods instead relied on propensity scoring).\n\n\n\n",
" We thank the reviewer for detailed comments and feedback on our paper. \n\n**On The Novelty of Our Techniques**:\n\nThe reviewer is concerned about the novelty of our method relative to previous estimators of ATE/ITE based on deep structural equations and generative models (e.g., CEVAE).\n\nWorking with multiple unstructured modalities (some of which may be missing) requires us to develop novel approximate variational inference techniques that improve over existing generative models. Specifically, we identify the following key differences:\n\n- We propose **new generative model architectures** that extend existing models (e.g., DSE, CEVAE) to multiple proxies $X_i$, each possibly coming from a different modality.\n- We derive **novel inference algorithms** for these extended models, which have the following benefits:\n - Our algorithms scale better to large sets of modalities by leveraging the independence structure of the $X_i$. \n - Our inference algorithms naturally handle missing $X_i$.\n - They are also simpler: they don’t require auxiliary networks (e.g., like in CEVAE).\n- Lastly, our key contribution is that we demonstrate the effectiveness of generative models at **modeling unstructured proxies** (many previous methods instead relied on propensity scoring).\n\n- Using modality specific inference networks also allows us to use modality-specific architectures separately, allowing us to process diverse modalities like genomic sequences, images or tabular data at the same time.\n\n**On Additional Questions**\n\n*Clarification on ‘Deep’ Structural Equations*\n\nThe reviewer is right in pointing out that linear structural equations by themselves cannot learn representations. For this reason, we use a deep neural network parameterization to extract useful features from unstructured information. For example, we can use neural networks specific to a modality (e.g CNNs) to extract features like age or gender from the image of a person. \n\n*ELBO derivation*:\n\nOur reconstruction term is $E_q \\log (p(x, y, t|z))$. We utilize the conditional independencies within the causal graph in Figure 1 to factorize $p(x, y ,t | z)$. \n\n$E_q \\log (q(z| x, t, y)/p(z))$ corresponds to the KL divergence term. Please refer to **Appendix H.1** for a detailed derivation. \n\n*On the derivation for factorization of posterior:*\n\n$p(z|x,t,y) ∝ (p(z|t, y) \\prod_{j=0}^{j=m}p(z|x_j))/( \\prod_{j=0}^{j=m-1} p(z))$\n\nThis factorization is derived by using the conditional independencies between the input modalities $x_i$ given hidden confounder $z$ as implied by the causal graph. We apply Bayes rule to obtain conditioning of variables on $z$ and then exploit the conditional independencies to factorize the distribution. Please refer to **Appendix H.2** for full derivation. \n\nHere we assume that if the true posterior components $p(z|x_i)$ and $p(z|t, y)$ are contained in the variational counterparts $q(z|x_i)$ and $q(z| t, y)$ respectively, then we can obtain the factorization as approximately equal to $q(z|t, y) \\prod_{j=0}^{j=m}(q(z| x_j) ) / (\\prod_{j=0}^{j=m-1} p(z))$\n\n\n*Why are auxiliary networks not necessary?*\n\nThe CEVAE involves using additional auxiliary networks $Q(T|X)$ and $Q(Y|X, T)$ when inferring the posterior $Q(Z|X)$ while estimating treatment effects [1]. Since we use a product-of-experts formulation, we do not need to train these additional networks. We can compute $Q(Z|X)$ from the $Q(Z|X_i)$ networks trained over each individual modality $X_i$ as shown in equation (7) in our paper, which also allows us to handle missing modalities gracefully.\n\nWe will modify the descriptions of the above parts so that the equations are easier to follow. \n\nWe would also like to point the reviewer to additional discussion in **Appendix G** concerning alternative causal graph structures added in response to other reviewer comments. \n\n[1] Christos Louizos, Uri Shalit, Joris Mooij, David Sontag, Richard Zemel, Max Welling. Causal Effect Inference with Deep Latent-Variable Models. 2017\n\n",
" **Answering Remaining Reviewer Questions**\n\nNote that we already compare against VAE-based approaches, e.g., in the missing data experiment (Section 5.4) and on GWAS (Section 5.3). The single-modality DSEs are effectively comparable to existing VAE-based methods (e.g., CEVAE), and our multi-modal approach performs better.\n\nExamples of proxy variables admissible in our models include:\n- Unstructured proxy variables: e.g., wearable sensor time series data as a proxy for a patient’s health\n- Structured proxy variables: e.g., BMI as a proxy of the subject’s health\n- In the aforementioned extension, we also admit variables V that correspond to observed confounders: e.g., a patient’s smoking status.\n\n**Answering specific minor issues/concerns**:\n\n*Confounder is not defined in this paper*: We define Z as confounder variable in the background section.\n\n*Are \"unstructured proxy variables\", \"unstructured multi-modal proxy variables\" and \"unstructured data\" the same in this work? It is better to keep consistent*.\n\nThe unstructured data available in modern datasets can be used as proxies to extract hidden confounder during causal effect estimation. This unstructured data can be in the form of multiple modalities (e.g images, tabular data, genomic sequence data, etc). We will explain this better in the final version of the paper. \n\n*Line 143 to 147, It would be better to provide a more detailed explanation. The current evidence and conclusion given are somewhat sloppy*.\n\nHere, we are using product-of-experts formulation to infer the posterior. We will add a detailed derivation of the factorization in the appendix so that this is clearer. In lines 143 to 147, we point out that we can compute the posterior in closed form when the terms on the right-hand side in Equation 6 are Gaussians. In this special case, it is possible to compute the product of these distributions as another Gaussian with mean and standard deviation as specified in these lines. We will make this clearer and add more explanation on this to the appendix. Please also refer to the derivations provided in **Appendix H**. \n\n*The conclusion in Section 5.1 is not clear from the current descriptions.*\nThis section demonstrates a toy example where the causal model (based on our deep structural equations) produces better ATE estimates on a test dataset. We show that substituting a binary input variable with unstructured image modality does not degrade the ATE estimates. Thank you for pointing out that some parts were not clear here - we will address this in the final version of the paper. \n\n\nWe thank the reviewer for pointing out the following typos. We will correct these. \n*multiple multi-model, it is better to remove `multiple'*.\n*After Line 198, \"1\", \"0\" -> ``1\", ``0\"*",
" We thank the reviewer for detailed comments and feedback on our paper. \n\n**On the assumption that all input covariates are unstructured proxies**:\n\nThe reviewer is concerned that our model treats all input covariates as unstructured proxies. First of all, **this is a misunderstanding**: in our experiment, we use both structured and unstructured proxies.\n\nThe reviewer’s statement can also be interpreted more broadly as a concern that all $X_i$ are conditionally independent proxy variables. To address this potential concern, we derive new extensions to our methods to the following settings:\nIn addition to proxy variables $X_i$, we also observe a covariate $V$ that represents observed confounders (i.e., we always have it in the data, and it influences $Y$, $T$).\nThe proxy variables may have mutual dependencies, i.e. **$X_i$** and **$X_j$** may be connected by edges in the causal graph.\n\nWe formally define these extensions in a new **Appendix G**, and we also derive the following results:\n\n- The true causal effect of **$T$** is identifiable in the more general setting presented above (**Theorem 2**).\n- We propose a new model family, **DMSE-V**, which extends **DMSE** to the above setting via simple modifications (e.g., the introduction of a new variable).\n- We derive learning and inference algorithms for **DMSE-V**. The result of these algorithms is an estimator for the ATE and ITE that holds in the extended setting.\n - The learning and inference algorithms are minor modifications to our existing ones. In particular, they involve adding the extra variable $V$ as input to the distributions $P(Y | T, Z, V), P(T | Z, V), Q(Z | Y, T, V, X)$, and the learning algorithms remain the same.\n\n**On comparing our methods with CEVAE/other VAE based estimators**:\n\nThe reviewer is concerned about the novelty of our method relative to previous estimators of ATE/ITE based on deep structural equations and generative models (e.g., CEVAE).\n\nWhile our method is an instance of generative models, we identify the following key differences:\n\n- We propose **new generative model architectures** that extend existing models (e.g., DSE, CEVAE) to multiple proxies $X_i$, each possibly coming from a different modality.\n- We derive **novel inference algorithms** for these extended models, which have the following benefits:\n - Our algorithms scale better to large sets of modalities by leveraging the independence structure of the $X_i$. \n - Our inference algorithms naturally handle missing $X_i$.\n - They are also simpler: they don’t require auxiliary networks (e.g., like in CEVAE).\n- Lastly, our key contribution is that we demonstrate the effectiveness of generative models at **modeling unstructured proxies** (many previous methods instead relied on propensity scoring).\n\n**Theoretical Justifications for The Proposed Method**\n\nOur graphical structure is directly inspired by theoretical results for models with similar structures, but that assume linear (rather than neural) functions between the variables.\n\nMost notably, Kuroki and Pearl give identifiability results for when the latent $Z$ and observed $X_i$’s are discrete, and $X_i$’s are different views of $Z$. The model is identifiable precisely only when there are at least two conditionally independent proxy variables. Our assumption of multiple independent proxies mirrors theirs.\n\nWhen $Z$ and $X_i$’s are continuous, a natural first step is the linear case: when $X_i$’s are different noisy and potentially missing views of the latent $Z$, and that the outcomes $Y$ are linear in $Z$. Under appropriate regularity conditions, Kallus et al. 2018 proved a PAC bound showing that their matrix completion method can recover the subspace of $Z$ w.h.p. (Theorem 2) and hence recover the ATE (Theorem 3). Hence, our model is identifiable if we choose a linear (non-neural parametrization); unfortunately, such linear models do not handle unstructured variables well in practice.\n\nFor the general, non-linear setting, finite-sample PAC analysis remains an open question to the best of our knowledge. Wang and Blei’s deconfounder method provides a non-parametric identification result, but they operate under the strong and unverifiable assumption that every (multi-cause) confounder is *pinpointed* by the observed data (Assumption 2). Our method can be framed in this way, in which we assume that our generative model can reconstruct the latent $Z$ using the multi-modal observations. \n\n**Citations**: \n\nPearl, J. Causality. Cambridge University Press, 2009\n\nKuroki, Manabu, and Judea Pearl. \"Measurement bias and effect restoration in causal inference.\" Biometrika 101.2 (2014): 423-437.\n\nKallus, Nathan, Xiaojie Mao, and Madeleine Udell. \"Causal inference with noisy and missing covariates via matrix factorization.\" Advances in neural information processing systems 31 (2018).\n\nWang, Yixin, and David M. Blei. \"Towards clarifying the theory of the deconfounder.\" arXiv preprint arXiv:2003.04948 (2020).\n",
" This paper presents novel deep multi-modal structural equations for causal effect estimation from unstructured data with unobserved confounders. The correctness of the developed method relies on the set of rich unstructured proxy variables and perfect modelling. The experiments on synthetic and semi-synthetic datasets show the performance of the developed method. This paper considers a very important problem in causal inference, i.e. estimating the causal effect of intervention from unstructured data with latent confounders. The paper seems to take the main step of CEVAE over some specialized architectures, such as convolutions for images. The idea of the paper is good and interesting. However, to my understanding, the assumption of all covariates are the unstructured proxy variables, seems too strong to be satisfied in many real-world applications. For example, what is the latent confounder of the proxy variable \"sex of baby\" in terms of IHDP? This may not be very practical. Moreover, there are many issues with the presentations that make them not easy to follow. Overall, the paper is not good enough.\n \nSome minors:\n\n*multiple multi-model, it is better to remove `multiple'.\n\n*Confounder does not define in this paper.\n\n*Are \"unstructured proxy variables\", \"unstructured multi-modal proxy variables\" and \"unstructured data\" the same in this work? It is better to keep consistent.\n\n*Line 143 to 147, It would be better to provide a more detailed explanation. The current evidence and conclusion given are somewhat sloppy.\n\n*After Line 198, \"1\", \"0\" -> \\``1\", \\``0\"\n\n*The conclusion in Section 5.1 is not clear from the current descriptions. \n \n=== Strengths ===\n\n1. A novel deep multi-modal structure equation is developed for unstructured data \n\n2. Experiments conducted on a number of datasets show the performance of the developed algorithm. \n \n=== Weaknesses ===\n\nThere is not a theoretical analysis of the developed deep multi-modal structure equations. Q1. Why not compare the developed methods with CEVAE and other VAE-based estimators in the experiments?\n\nQ2. What are the missing modalities $X_j$ in this work? Yes, I have not found a negative societal impact. ",
" This paper addresses causal effect estimation when there are unobserved confounders but there exist (often unused) unconstrained data that could be used as proxies for the unobserved confounders. A model named Deep Multi-modal Structural Equations (DMSE) is proposed to leverage the multi-modal data. The experimental results demonstrate the effectiveness of the proposed method on two real-world tasks.\n Pros:\n- The paper is written clearly and it’s easy to read.\n- The empirical experiments are thorough.\n- The contribution is significant in that the proposed model can leverage often unused and sometimes missing multi-modal data to enhance causal effect estimation performance. \n\nCons:\n- The originality of the paper seems a bit limited as the proposed method is composed of already available components (i.e., VAE, SE).\n- There is little explanation for the main equations stated in the paper. I will comment on these in the “Questions” section.\n - Lines 103-104 state that “our approach uses deep structural equations to extract confounding signal from the multi-modal proxies”. The “deep” part does this not the “structural equations” part, correct? Because SEs cannot do representation learning.\n- How did you arrive at Eq. (5)? ELBO should have a reconstruction error part and a KL divergence part. Please provide detail on how you derived this ELBO.\n- Lines 137-139 state that the posterior factorizes in that way. Could you please elaborate on why this is the case?\n- Line 155: please elaborate why auxiliary inference networks are not necessary in your application?\n N/A\n",
" This work addresses the problem of estimating the unbiased causal effect of an intervention on the outcome using unstructured proxy variables. The authors first formulate the underlying causal structure of the problem. The authors then develop a generative approach, i.e., estimating the parameters of the deep multi-modal structural equations (DMSEs) or Deep Gaussian Structural Equations (DGSEs) via optimizing the derived multi-model evidence lower bound. The causal effects, therefore, can be computed from the learned models. The experiments used real-world datasets (IHDP, STAR, and GWAS datasets) to demonstrate that the developed models can estimate ATE with fewer errors than competing methods. \nStrengths:\n1) The authors provide motivating real-world applications (Healthcare and Genomics) to show the context where a large amount of unstructured data is available and the missing confounders' issues in these respective applications.\n2) The notations, problem formulation, the underlying causal model, the derivation of the objective, and the estimation of causal effects are presented.\n3) Details of the experimental setup are well described in the manuscript and the appendix.\n\nWeaknesses:\n1) The central issue of this work heavily relies on the causal structure they are considering (shown in Figure 1), which I will discuss in more detail in the next section). \n2) In the experimental section if the STAR dataset were used for the experimental validation, its results (i.e., Table 6 in the appendix) should also be presented in the manuscript to help readers read smoothly. \nThis work heavily relies on the assumption that the causal structured is the form $$ X = f_1(Z, \\epsilon_1), T = f_2(Z, \\epsilon_2), Y = f_3(Z, Y, \\epsilon_3). $$ The causal effect estimation from the DMSE or DGSE is unbiased when the causal structure is indeed true. However, many other dependencies between these variables make sense in the real-world setting (such as the existence of the interdependence between proxy variables or the proxy variables also control the distribution of the intervention variable and the outcome variable), which the author did not discuss. \n\nThe following questions should be discussed:\n\nWhy is only the causal structure shown in Figure 1 discussed?\nWhat are the relevant issues, such as whether it is possible to model it using a generative model?\nIs it able to estimate the ATE from it? \nWhat might go wrong if the underlying model is inconsistent? \n\nAlso, the following paper might be helpful for the discussion:\n\nMiao, Wang, Zhi Geng, and Eric J. Tchetgen Tchetgen. \"Identifying causal effects with proxy variables of an unmeasured confounder.\" Biometrika 105, no. 4 (2018): 987-993. Similar to the above discussion, it is beneficial that the authors could discuss the assumption made on the causal model and the possible limitations or societal impact on the real-world applications."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"6f5QEsBv5LU",
"pNdRmuEXwh0",
"OLCHt440rpm",
"MuWTRETeX9s",
"iHEIrlMOhOx",
"Teu1hbtQgLi",
"ooFjL4M_tki",
"SLcdPwRjJaW",
"M2obpj9-UwF",
"_BvvogV-EC-",
"JNHchPwGhH",
"nips_2022_ByYFpTwgLGO",
"nips_2022_ByYFpTwgLGO",
"nips_2022_ByYFpTwgLGO"
] |
nips_2022_E9HNxrCFZPV | NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation | Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation. Previous TTA schemes assume that the test samples are independent and identically distributed (i.i.d.), even though they are often temporally correlated (non-i.i.d.) in application scenarios, e.g., autonomous driving. We discover that most existing TTA methods fail dramatically under such scenarios. Motivated by this, we present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams. Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner. Our evaluation with various datasets, including real-world non-i.i.d. streams, demonstrates that the proposed robust TTA not only outperforms state-of-the-art TTA algorithms in the non-i.i.d. setting, but also achieves comparable performance to those algorithms under the i.i.d. assumption. Code is available at https://github.com/TaesikGong/NOTE. | Accept | The paper proposed two test-time adaptation methods a) instance-aware batch normalization and b) prediction-balanced reservoir sampling and used these to show that the proposed method is better in the non-iid setting.
The reviewers found this to be an important problem and the experiments generally convincing. Reviewers objected to the choice of datasets (not commonly used to evaluate adaptation), the baseline models (not state-of-the-art models), and the effect size. In the end, all reviewers found the results strong enough and voted to accept. | val | [
"BnZuReLSjyx",
"i-JEjbkjjgR",
"v4PIY92_5eA",
"1SGzegtgiOu",
"OHT5H4Zab62",
"eD_gavTXdQ4",
"izOEmksbpDL",
"0fIbHJ_26lf1",
"OygYPlYaWop",
"ji80Gi3T9A1",
"8j06jEc9wB",
"sz1muyMVHE-",
"Zj6EtZy_A3qa",
"NWqRST6C9oT",
"OuyoyEm3Ecz",
"qWPYZ3e6xxy",
"MVcWsiacSev",
"pLhU1eCRXO",
"-dw4WhZ8FFi",
"xl8by_jGVvt",
"cvGIFr7ETV-"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response to our rebuttal! We agree that the concerns you pointed out are the current limitations of our study and could be further improved in the future. Still, we believe our contributions make a meaningful step towards the practical applications of the test-time adaptation paradigm, as you acknowledged.\n\nThank you again for the valuable suggestions and comments.\n\nBest, \n\nAuthors. \n\n\n",
" Thank you for carefully reviewing our rebuttal and leaving an acknowledging response to us. We are glad that our rebuttal addressed your concerns. \n\nIn our final draft, we will incorporate your suggestions. Thank you again for your valuable comments and feedback to improve our work. \n\nBest, \n\nAuthors. \n",
" Dear Authors,\n\nI would like to thank the authors for providing additional analysis and evaluations. After reading other reviewers' comments and the responses and revisions I have two concerns not fully addressed which could be further improved in the future. Firstly, using CIFAR10/100-C to demonstrate test-time adaptation on temporally correlated data as the main results and ablation study do not reflect the full potential of the proposed method, therefore future evaluation should focus on some real datasets which have the temporal correlated feature. Secondly, the proposed IABN is only applicable to architectures with BN layers, which is another restriction to the adoption of IABN in more recent backbone networks. In terms of the hypothesised usercase, i.e. autonomous driving, I also suspect whether TTA is really applicable. Given the high demand in safety, adapting pretrained model on-the-fly is not very likely.\n\nOverall, I think this paper has some merits in the problem formulation and I will keep my rating, leaning towards acceptance.",
" Please note that this message acknowledges all four of the responses to the initial review, and I am simply making one post to avoid triggering multiple notifications.\n\n**Summary**: I have raised my score by one as the response has clarified the points of confusion and has provided additional results that (1) compare on the benchmark standard of ImageNet-C and (2) justify the choice of datasets and claims about shift. Most importantly, the responses to Q1, Q2, and Q4 convincingly resolve the potential issues with not benchmarking at larger scale (Q1), possible increase in computational cost (Q2), and whether or not the user activity datasets exhibited shift (Q4).\n\nHaving highlighted these improvements, I should note that revision will be needed if this submission is accepted to prioritize results between the main paper and supplement and to ensure that there is space for related work. I encourage the authors to include their results on ImageNet-C, and to further scale them up to ResNet-50, as that is the most commonly reported combination of dataset and model for test-time adaptation.",
" Thank you for your response to our rebuttal! We are glad that our rebuttal addressed most of your concerns.\n\n---\n**ReQ1. I am still not convinced about the necessarily of reservoir sampling. If inputs are temporally correlates, isn't it natural to prioritize temporally local inputs during adaptation?**\n\nThank you for your follow-up question. As shown in the results of the baselines in our experiments (Table 1-2), prioritizing temporally local inputs during adaptation” **mostly fails** in temporally correlated scenarios. In the background section (Section 2.2) of our original submission, we mentioned that existing studies use only an incoming batch of samples for adaptation, which is the extreme form of prioritizing temporal samples (line 98-101):\n> Although a conventional way of computing BN in test-time is to set $\\mu$ and $\\sigma^2$ as those estimated from _training_ (or source) data, say $\\bar{\\mu}$ and $\\bar{\\sigma}^2$, the state-of-the-art TTA methods based on adapting BN layers [28, 32, 40, 43] instead **use the statistics computed directly from the recent test batch to de-bias distributional shifts** at test-time, ...\n\nWe remark that “prioritizing temporally local inputs” can lead to undesirable (and somewhat unexpected) bias and catastrophic forgetting in our temporally correlated scenarios as explained in our original submission (Section 3, line 112-116): \n> Under scenarios where test data are temporally correlated, however, naively adapting to the incoming batch of test samples [28, 32, 40, 43] could be problematic for both approaches: the batch is now more likely to (a) **remove instance-wise variations** that are actually useful to predict $y$, i.e., the “contents” rather than “styles”' through normalization, and (b) **include a bias in $p(y)$ rather than uniform**, which can negatively affect the test-time adaptation objective such as entropy minimization.\n\nThus, we rather **aim to eliminate temporal correlation** for better calibration of BN layers, which is achievable via our Prediction-Balanced Reservoir Sampling and results in outperforming the baselines in temporally correlated scenarios.\n\n---\n**ReQ2. Besides, in most experimental setup, it seems the labels are also temporally correlates, which means balancing class label would eliminates (most) temporal correlations.**\n\nIn addition to our original response in Q4, if temporal correlation is introduced *within a class*, we might lose instance-wise variations, which leads to catastrophic forgetting. For instance, consecutive frames for the “bicycle” class (Figure 8 in the supplementary file) are very similar and thus do not give useful information for class-specific variations.\n\n---\n\nThank you again for the valuable suggestions and comments.\nIf you have any remaining suggestions or concerns, please let us know!\n\nBest, \n\nAuthors.\n",
" Thank you for your detailed response. It resolve most concerns, however, I am still not convinced about the necessarily of reservoir sampling. If inputs are temporally correlates, isn't it natural to prioritize temporally local inputs during adaptation? Besides, in most experimental setup, it seems the labels are also temporally correlates, which means balancing class label would eliminates (most) temporal correlations. \n\nOverall, I still think the paper provide several interesting findings. I therefore would like to keep my score. ",
" Dear reviewers,\n\nThank you for your time and efforts again in reviewing our paper.\n\nWe kindly remind that the discussion period will end soon (in a few days). \n\nWe believe that we sincerely and successfully address your comments, with the results of the supporting experiments.\n\nIf you have any further concerns or questions, please do not hesitate to let us know.\n\nThank you very much!\n\nAuthors\n\n",
" Dear reviewers,\n\nWe appreciate all of you for your **positive reviews**, and highlighting the strengths of our work: \n- **GmfJ**: very important problem, comprehensive, nice presentations, well-explained background, clear motivation, novel methodology, and effectiveness over baselines.\n- **5mFV**: well-motivated, clear illustration, sound experimental design on standard datasets, strong results, strong baselines, and consistency of hyperparameters.\n- **bg8w**: addressing a practical concern in TTA, and extensive experiments.\n- **AWKz**: timely and important topic, pointing out an important issue of existing studies, and validation with synthetic/realistic benchmarks.\n\nWe also sincerely thank reviewers for their **constructive** comments to improve our manuscript. We have addressed all the questions raised by reviewers with new experiments during this rebuttal period. We summarize how we addressed the **reviewers’ main questions** as follows:\n\n1. **5mFV**: We evaluated NOTE and the baselines on *ImageNet-C*, showing the effectiveness of NOTE on a large-scale dataset. \n2. **5mFV**: We evaluated the performance in the source domain for HARTH and ExtraSensory datasets, demonstrating the difficulty of the problem.\n3. **5mFV**: We evaluated the effect of IABN without re-training the source model. \n4. **GmfJ** and **bg8w**: We discussed and evaluated the applicability of NOTE and the baselines to a Transformer-based model (BERT) and NLP tasks.\n5. **5mFV** and **AWKz**: We clarified missing details of our methodology. We also detailed the figure describing our methodology (Figure 3). \n6. **bg8w**: We included an analysis of real-time accuracy change. \n\nWe submitted our **revised draft and supplementary file** that addressed individual concerns. We marked changed parts with blue fonts. In summary, we made the following changes:\n1. We changed Figure 3 (method overview) for a better illustration of IABN.\n2. We specified the output of IABN.\n3. We elaborated on the hyperparameters of our method.\n4. We detailed the process of adaptation and inference of our method.\n5. For tables, we marked degraded performance after adaptation in red fonts.\n6. We renamed “real distributions with domains shift” to “real-distributions with domain shift” to avoid confusion.\n7. We explained why LAME (one of the baselines) does not work on i.i.d. scenarios. \n8. We specified the model architectures for the HARTH and ExtraSensory models.\n9. We revised our discussion about models without BN layers.\n10. We included an evaluation of the real-time performance changes for the real-distribution datasets.\n11. We included an evaluation of the error rates on the source domain for the real-distribution datasets.\n12. We included an evaluation of replacing BN with IABN during test time.\n13. To meet the 9-page limit, we temporarily moved the related work section to the supplementary file. \n\n\nThank you for your consideration,\n\nAuthors\n",
" We sincerely appreciate your effort and time in offering us thoughtful comments. We respond to each of your questions and concerns one-by-one in what follows. We also ask you to kindly refer to the *common response* we have posted together.\n\n\n---\n**Comment1. Overall, I think this paper provides a good contribution for the test-time adaptation research, with extensive experiments. However, I found several important details lacking in current manuscripts, including the detail of the proposed method (especially IABN part), explanation about why the proposed method should work better than prior works, and empirical validations. See questions for the detailed comments.**\n\nWe thank Reviewer AWKz for all your clarifying questions that help us to improve the readability of our method section. Please refer to our answers below. We also submitted a revised draft that addressed them.\n\n\n---\n**Q1. (1) Why do we need equation (4) and equation (5)? What happens if we always u_{b, c} to compute outputs?**\n\nEquation (4) and Equation (5) are the key ideas of IABN that make it robust to distributional changes during test time. It corrects the learned batch normalization statistics when the current input is too deviated from the learned distribution.\nIf we use the sample mean $\\tilde{\\mu}\\_{b,c}$ and the sample variance $\\tilde{\\sigma}^2\\_{b,c}$ instead, it is the same as using Instance Normalization, as we originally put in line 143-144: \n> If one chooses too small $k\\geq 0$, IABN may remove useful features, e.g., styles, of input (as with IN), which can degrade the overall classification (or regression) performance [24].\n\n\n---\n**Q2. (2) What is the final output of IABN layer?**\n\nThe output of the IABN layer is similar to BN, except for the notations used in Equation (5). Specifically, IABN replaces $\\mu\\_c$ and $\\sigma\\_c^2$ in BN with ${\\mu}\\_{b,c}^{IABN}$ and $({\\sigma}\\_{b,c}^{ IABN})^2$, respectively:\n\n\n- $\\mathrm{BN}(\\mathbf{f}\\_{:,c,:}; \\mu\\_c, \\sigma^2\\_c) := \\gamma\\cdot\\frac{\\mathbf{f}\\_{:,c,:} - \\mu\\_c}{\\sqrt{\\sigma\\_c^2 + \\epsilon}} + \\beta$ (same as Equation (1))\n\n- $\\mathrm{IABN}(\\mathbf{f}\\_{b,c,:}; \\bar{\\mu}\\_c, \\bar{\\sigma}^2\\_c; \\tilde{\\mu}\\_{b,c}, \\tilde{\\sigma}^2_{b,c}) := \\gamma\\cdot\\frac{\\mathbf{f}\\_{b,c,:} - {\\mu}\\_{b,c}^{\\tt IABN}}{\\sqrt{({\\sigma}\\_{b,c}^{\\tt IABN})^2 + \\epsilon}} + \\beta$\n\nWe believe describing the final output of the IABN layer would improve the readability of the paper. We included the description of the output of IABN in both Section 3.1 and Figure 3, in the revised draft.\n\n---\n**Q3. (3) Regarding the exponential moving average presented in 3.2, do you stop gradient for the past mean and sigma?**\n\nWe don’t need to stop; $\\mu$ and $\\sigma^2$ are not trainable parameters but internal statistics of BN (and IABN) layers. The trainable parameters are $\\gamma$ and $\\beta$ as we described in the original submission (line 166-169):\n> Specifically, we update the normalization statistics, namely the means $\\mu$ and variances $\\sigma^2$, via exponential moving average …. We further optimize the affine parameters, scaling factor $\\gamma$ and bias term $\\beta$ via entropy minimization, similar to a previous study [40].\n\n\n---\n**Q4. (4) Why do we need lines 9-17 for PBRS? 
In other words, why don’t you randomly discard one instance in memory even if y_t \\in L?**\n\nIf we randomly discard one instance in the set of the majority class(es) $L$, old samples hardly survive as they are candidates to be dropped for a long time, and the memory is likely to have relatively new samples. With Reservoir Sampling (line 9-17), we can ensure the samples that reside in the memory are *time-uniform*. This is beneficial for temporally-correlated streams; time-uniform samples are more balanced in terms of distribution than adjacent samples, and thus estimating the target distribution from time-uniform samples helps the generalization of the model.\n\n\n---\n**Q5. (5) In table 1, why does the value for Source differ between i.i.d setup and non-i.i.d setup?**\n\nThank you for pointing out this issue. We used an old result for the Avg column in non-i.i.d. Setup. We updated this value in the revised paper.\n",
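To make the prediction-balanced reservoir sampling (PBRS) logic discussed in Q4 above more concrete, here is a minimal Python sketch of a PBRS-style memory. It is pieced together from the descriptions in these responses (pseudo-labels for class balancing, time-uniform reservoir sampling within the majority class, eviction from a majority class otherwise); the class name `PBRSMemory` and the tie-breaking details are assumptions, not the authors' reference implementation.

```python
import random
from collections import defaultdict


class PBRSMemory:
    """Prediction-Balanced Reservoir Sampling memory (illustrative sketch).

    Keeps at most `capacity` test samples, using the model's predicted class as a
    pseudo-label to stay approximately class-balanced, and reservoir sampling
    within a class to stay time-uniform. Assumes integer class labels.
    """

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.samples = []                       # list of (x, pseudo_label)
        self.seen_per_class = defaultdict(int)  # how many samples of each class were seen

    def _majority_classes(self):
        counts = defaultdict(int)
        for _, y in self.samples:
            counts[y] += 1
        max_count = max(counts.values())
        return {y for y, c in counts.items() if c == max_count}

    def add(self, x, pseudo_label):
        self.seen_per_class[pseudo_label] += 1
        if len(self.samples) < self.capacity:
            self.samples.append((x, pseudo_label))
            return
        majority = self._majority_classes()
        if pseudo_label in majority:
            # Reservoir sampling within the class: keep a time-uniform subset.
            stored = [i for i, (_, y) in enumerate(self.samples) if y == pseudo_label]
            n_seen = self.seen_per_class[pseudo_label]
            if random.random() < len(stored) / n_seen:
                self.samples[random.choice(stored)] = (x, pseudo_label)
            # otherwise the incoming sample is dropped
        else:
            # Minority class: evict a random sample from a (randomly chosen) majority class.
            victim_class = random.choice(sorted(majority))
            victims = [i for i, (_, y) in enumerate(self.samples) if y == victim_class]
            self.samples[random.choice(victims)] = (x, pseudo_label)
```

In use, each incoming test input would first be predicted (with the prediction returned immediately), and that prediction would then serve as the pseudo-label passed to `add`.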
" ---\n**Q6. (6) If I correctly understood the paper, PBRS can be incorporated with any prior studies, e.g., TENT, PL. Why did you choose to combine it with IABN? In other words, is IABN itself perform better than existing methods?**\n\n\nYou are right. Theoretically, PBRS can be combined with baselines (and IABN also can be). However, our motivation for the joint use of IABN and PBRS is to solve two different problems in test-time adaptation under temporally correlated test streams. As we wrote in our original submission line 112-120:\n> Under scenarios where test data are temporally correlated, however, naïvely adapting to the incoming batch of test samples [28 , 32, 40, 43] could be problematic for both approaches: the batch is now more likely to (a) remove **instance-wise variations** that are actually useful to predict $y$, i.e., the “contents” rather than “styles” through normalization, and (b) include a **bias in $p(y)$ rather than uniform**, which can negatively affect the test-time adaptation objective such as entropy minimization. We **propose two approaches to tackle each of the failure modes of adapting BN under temporal correlation**. Our method consists of two components: (a) Instance-Aware Batch Normalization (IABN) (Section §3.1) to overcome the limitation of BN under distribution shift and (b) Prediction-Balanced Reservoir Sampling (PBRS) (Section §3.2) to combat with the temporal correlation of test batches. \n\n\nAs demonstrated in our ablation study (Table 3), IABN-only still shows the lowest error in CIFAR10-C (24.6%) and the second lowest error in CIFAR100-C (54.5%), under temporally correlated test streams. When IABN and PBRS are jointly used, the error becomes even lower for both CIFAR10-C (21.1%) and CIFAR100-C (47.0%), which highlight the synergy between them against temporally correlated streams.\n\n\n\n\n---\n**Q7 (minor). Equation (4), the left hand side should be u_c/L?**\n\n\nOur Equation (4) is correct - $s^2\\_{\\tilde{\\mu},c} := \\frac{\\bar{\\sigma}^2\\_c}{L}$. The sample mean, $\\tilde{\\mu}\\_{b,c}$, follows the *sampling distribution* of sample size $L$ in $\\mathcal{N}(\\bar{\\mu}, \\bar{\\sigma}^2)$ as the population, as in line 136 of our original submission. Thus, the “variance\" of the sample mean, $s^2_{\\tilde{\\mu},c}$, is driven from the variance $\\bar{\\sigma}^2$ of the distribution where it is sampled from.\n\n\n\n\n\n\n---\n**Q8 (minor). Lines 282-284, can you explain why LAME does not works well on i.i.d setup, or even worse than non-i.i.d setup?**\n\n\nLAME is a method that directly manipulates the output feature vector with laplacian optimization to achieve good performance in non-i.i.d. settings. In essence, as stated in its paper [4], what it does is that it “discourages deviations from the predictions of the pre-trained model”. Thus, it thrives in non-i.i.d scenarios where consecutive samples are correlated, which can cause the model to “overspecialize” in a narrow portion of the target stream. However, it is not helpful in i.i.d. scenarios where such an assumption does not hold, causing - quoting its paper: **“it does not noticeably help in i.i.d and class-balanced scenarios”**.\n\nWe added the explanation of why LAME does not work in the i.i.d. scenarios in the revised draft (Section 4.1).\n",
" We sincerely appreciate your effort and time in offering us thoughtful comments. We respond to each of your questions and concerns one-by-one in what follows. We also ask you to kindly refer to the *common response* we have posted together.\n\n\n---\n**Comment1. The improvement on KITTI is very marginal (as shown in Tab. 2). KITTI is the real temporally correlated dataset that motivates this work but the improvement is not significant to support the claims.**\n\nConsidering the *relative* improvement of the error with respect to the Source error, NOTE shows around **11%** improvement in KITTI (12.3%→10.9%), **18%** in HARTH (62.6%→51.0%), and **9%** in ExtraSensory (50.2%→45.4%). Thus, we believe that the improvement in KITTI is not marginal. In addition, we emphasize that most baselines (BN Stats, ONDA, PL, TENT, CoTTA) show even worse error rates after the adaptation. \n\nFurthermore, NOTE consistently shows significant improvements on other datasets (CIFAR10-C: 42.3% →21.1%; CIFAR100-C: 66.6%→ 47.0%; MNIST-C: 16.1%→7.1%, HARTH: 62.6%→51.0% , ExtraSensory: 50.2%→45.4%) while outperforming the baselines. During the rebuttal period, we conducted an additional evaluation on a large-scale robustness benchmark, ImageNet-C, and showed significant improvements (86.1%→80.6%). Please refer to our response to Reviewer 5mFV Q1 for the details of the ImageNet-C result.\n\n\n---\n**Comment2. For temporally correlated test-time adaptation task, CIFAR10-C/100-C can hardly simulate the temporal correlation. How does the temporally correlated streams look like? It is important to give illustrations or analysis of why this synthesized data stream presents temporal correlation.**\n\nWe agree that CIFAR10-C/100-C cannot simulate the genuine temporal correlation. That said, we note that most existing test-time adaptation studies adopt CIFAR10-C/100-C for their main evaluation benchmarks. To acknowledge those previous studies and present a fair comparison with them, we evaluated them under both temporally correlated and uniformly distributed scenarios for CIFAR10-C/100-C. We believe one prominent way to generate temporally correlated distributions from those datasets is to inject class-wise temporal correlation, which can be observed in the class distributions of the real datasets in Figures 1, 8, 9, and 10 in our original submission.\n",
" ---\n**Q1. Evaluation and discussion on how to generalize to transformer based backbones would further improve the quality of this work.**\n\nThank you for your suggestion. We discuss the applicability of our method to Transformer-based architecture in NLP tasks following Reviewer GmfJ who also asked a similar question to yours.\n\nWe sincerely appreciate your time and effort in providing us with positive comments. We respond to your question in what follows. We also ask you to kindly refer to the *common response* we have posted together.\n\n---\n**Comment1. In the experiments, the authors should use more tasks such as some NLP data and more popular base models like Transformer-based ones for both CV and NLP. Otherwise, it's hard to measure how the proposed methods really work in modern AI systems.**\n\nWe would like to highlight that all the SOTA baselines (BN Stats, ONDA, PL, TENT, CoTTA) except for LAME, are not applicable to Transformer-based models. Similarly, NOTE is not applicable to models without BN layers. In our original submission, we clarified and discussed this limitation in Section 6:\n> Similar to existing TTA algorithms [28, 32, 40, 43], **we assume that the backbone networks are equipped with BN layers**, and particularly, we replaced the BN layers with IABN layers. While BN is a widely-used component in deep learning, there exist architectures that do not embed BN layers such as LSTMs [13] and Transformers [38]. Whether naively inserting BN or IABN would be sufficient for applying these TTA methods is still in question. A recent study evidenced that BN is advantageous in Vision Transformers [44], showing potential room to apply our idea to architectures without BN layers. However, more in-depth studies are necessary to identify the actual applicability of BN (or IABN) to those architectures.\n\nNevertheless, the other technical component of our method, PBRS, can be implemented on transformers. To investigate the impact of PBRS on Transformer-based models for NLP tasks, we adopted the **BERT** [r1] tiny model. Specifically, PBRS was utilized to update the trainable parameters in the Layer Normalization layers of the model. We conducted test-time adaptation on common text sentiment analysis datasets, following previous work [r2]; we used **SST-2** [r3] (movie reviews; source domain) to fine-tune the BERT model and evaluated its test-time adaptation capabilities on **FineFood** [r4] (food reviews; target domain). This setup is valuable for understanding how TTA algorithms perform under the domain gap in text sentiment caused by the difference in topics. During test-time adaptation, we used the Dirichlet distribution for simulating the non-i.i.d. streams, as previously described in our experiment section. The result is presented in the table below:\n\nTable. Classification error (%) on FineFood with both temporally correlated and uniform test streams. The lower, the better. N/A refers to “not applicable.”\n| | non-i.i.d. | i.i.d. |\n|---|---:|---:|\n| Source | 37.4 ± 0.0 | 37.4 ± 0.0 |\n| BN Stats | N/A | N/A |\n| ONDA | N/A | N/A |\n| PL | N/A | N/A |\n| TENT | N/A | N/A |\n| LAME | 35.7 ± 0.3 | 41.6 ± 0.3 |\n| CoTTA | N/A | N/A |\n| **NOTE (PBRS-only)** | **34.8 ± 2.1** | **34.6 ± 2.5** |\n\nWe found that, to some extent, PBRS exhibits its effectiveness. Nevertheless, NOTE cannot fully utilize the synergy between IABN and PBRS regarding balanced statistics updates in this case. 
While LAME is applicable to models without BN, **LAME’s critical limitation is the performance drop in i.i.d. scenarios**, as shown not only for this particular experiment but also in our main evaluation. The primary reason is, as stated by the authors of LAME, that it “discourages deviations from the predictions of the pre-trained model,'' and thus it “does not noticeably help in i.i.d and class-balanced scenarios.”\n\nIn summary, while NOTE shows its effectiveness in **both** non-i.i.d and i.i.d. scenarios, a remaining challenge is to design an algorithm that generalizes to any architecture. We believe the findings and contributions of our work could give valuable insights to future endeavors on this end. We submitted a revised draft to incorporate this discussion.\n\n\n[r1] Devlin, Jacob, et al. \"BERT: Pre-training of deep bidirectional transformers for language understanding.\" arXiv preprint arXiv:1810.04805 2018.\n\n[r2] Moon, Seung Jun, et al. \"Masker: Masked keyword regularization for reliable text classification.\" AAAI 2021.\n\n[r3] Socher, Richard, et al. \"Recursive deep models for semantic compositionality over a sentiment treebank.\" EMNLP 2013.\n\n[r4] McAuley, et al. \"From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews.\" WWW 2013.\n",
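For readers wondering how "PBRS was utilized to update the trainable parameters in the Layer Normalization layers" might look in code, the snippet below sketches one plausible setup: collect the LayerNorm affine parameters of a BERT-like model and hand them to the optimizer used for the entropy-minimization step on memory samples. The helper name and the choice of optimizer are assumptions for illustration.

```python
import torch
import torch.nn as nn


def layernorm_affine_params(model: nn.Module):
    """Collect the affine parameters of every LayerNorm in the model (sketch)."""
    params = []
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            if module.weight is not None:
                params.append(module.weight)
            if module.bias is not None:
                params.append(module.bias)
    return params


# Hypothetical usage: adapt only the LayerNorm scales/offsets at test time.
# optimizer = torch.optim.Adam(layernorm_affine_params(bert_model), lr=1e-4)
```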
" We sincerely appreciate your effort and time in offering us thoughtful comments. We respond to each of your questions and concerns one-by-one in what follows. We also ask you to kindly refer to the *common response* we have posted together.\n\n---\n**Comment1. NOTE has to alter training by replacing BN with IABN, and therefore cannot be applied to off-the-shelf models without re-training. The test-time adaptation setting is in part motivated by not having access to the source/training data, so this makes NOTE less of a good fit for this particular assumption.**\n\nWe agree that IABN requires retraining off-the-shelf models. However, we emphasize that the limitation is not severe due to the following reasons:\n(1) Source data is usually accessible: Practitioners typically have their own data to train the model, and thus they can use IABN instead of BN when training from scratch. In addition, popular off-the-shelf models (such as ResNet) are usually trained from public datasets. Retraining the off-the-shelf models would be trivial, given the amount of testing that follows after the deployment. \n(2) As we answer in your Q3 (please see below), simply replacing BN with IABN without re-training still shows the effectiveness of IABN under distributional shifts. \n(3) The other technical component, PBRS, does not require re-training of the model and has similar performance gain as IABN when used solely.\n\nMoreover, NOTE has advantages over the baselines. NOTE requires only a “single” instance for inference which is imperative in real-time applications such as autonomous driving, while most baselines (BN Stats, PL, TENT, LAME, and CoTTA) have to wait for a “batch” for inference. In addition, NOTE needs a single forward for inferring each sample unlike CoTTA that requires 32 forward passes for each sample due to augmentation.\n\n\n---\n**Comment2. How NOTE updates is not fully specified. While Sec. 3.2 explains that inputs are sampled from the reservoir and batched with test inputs, it does not detail the proportion of test inputs to reservoir inputs and the number of gradient steps among other such considerations.**\n\n\nIn our original submission, we explained some of the details of PBRS such as memory size, update frequency, and training epochs in Section 4, line 188-191:\n> We assume the model pre-trained with source data is available for TTA. In NOTE, we replaced BN with IABN during training. We set the test batch size as 64 and epoch as one for adaptation, which is the most common setting among the baselines [32, 4, 40]. Similarly, we set the memory size as 64 and adapt the model every 64 samples in NOTE to ensure a fair memory constraint. \n\nWe agree that our original submission lacks some necessary details of PBRS, and thus we added further details on PBRS including Q2 and Q5 in the revised draft.\n\n---\n**Comment3. The choice of datasets is not entirely justified. HARTH and ExtraSensory are indeed temporal, but at the same time they are unfamiliar for the purpose of evaluating adaptation. Are these datasets subject to shift across the given sources and targets? It would be reassuring to measure source accuracy on source and target to evidence drops and thus the presence of shift. 
The architectures for these datasets are likewise not standard, which would be fine, except the text does not explain how to they incorporate IABN (though this presumably follows the convolution layers).**\n\nDomain shifts in human activity recognition via sensors are indeed a well-known problem since the advent of smartphones and the ubiquitous computing paradigm. The primary cause of the domain shift is the behavioral and environmental differences between users [r1]; an elderly’s jogging might be confused with a young’s walking. In addition, the placement of mobile devices varies according to the type of device (smartphone vs. smartwatch) and user's preference (hand vs. pocket). While there exists dozens of public human activity recognition datasets, HARTH and ExtraSensory datasets are collected in a free-living environment from tens of users that naturally entails domain shift, and thus we call them “real-distribution” datasets. \n\nFollowing your suggestion, we report the source accuracy on both source and target in Q4, which shows the domain gap and the difficulty of the problem.\n\nRegarding the architecture, BN (or IABN) layers are followed by the convolutional layer. We revised the draft to add this information (Section 4.2). Thank you for pointing this out.\n\n\n[r1] Stisen, Allan, et al. \"Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition.\" Proceedings of the 13th ACM conference on embedded networked sensor systems. 2015.\n",
" ---\n**Q1. ImageNet-C in the i.i.d. and non-i.i.d. settings with ResNet-18 (and ideally ResNet-50). For the non-i.i.d. setting, consider the same proposed Dirichlet ordering, or simply ordering the data by classes.**\n\nThank you for your suggestion. We tested NOTE and the baselines on ImageNet-C that contains 15 types of corruptions for 50,000 test samples, which is a total of 750,000 test samples. We adopt the most severe level of corruption (level-5) following previous studies. We used ResNet18 and simply ordered the data by classes. We kept the hyperparameters of NOTE the same as in the other experiments in our paper. \n\nWe specify the error rate on classifying among 1,000 categories for 15 corruption types and also report the averaged error rates. Bold type indicates those of the lowest classification error. The lower, the better.:\n \n(1) Classification error (%) for ImageNet-C with temporally correlated test streams (non-i.i.d):\n| | Gauss | Shot | Impulse | Defocus | Glass | Motion | Zoom | Snow | Frost | Fog | Bright | Contrast | Elastic | Pixelate | JPEG | _Avg_ |\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:-----:|:---:|:---:|:---:|:---:|:---:|\n| Source | 98.4 | 97.7 | 98.4 | 90.6 | 92.5 | 89.9 | 81.8 | 89.5 | 85.0 | 86.3 | 51.1 | 97.2 | 85.3 | 76.9 | 71.7 | 86.2 |\n| BN Stats | 98.3 | 98.1 | 98.4 | 98.7 | 98.8 | 97.8 | 96.6 | 96.2 | 96.0 | 95.1 | 93.1 | 98.6 | 96.3 | 95.6 | 96.1 | 96.9 |\n| ONDA | 95.1 | 94.7 | 95.0 | 96.2 | 96.1 | 92.5 | 87.2 | 87.4 | 87.8 | 82.7 | 71.0 | 96.4 | 84.9 | 81.7 | 86.1 | 89.0 |\n| PL | 99.3 | 99.3 | 99.4 | 99.5 | 99.4 | 99.5 | 98.8 | 99.1 | 99.1 | 98.1 | 97.3 | 99.7 | 98.4 | 98.5 | 98.5 | 98.9 |\n| TENT | 98.3 | 98.1 | 98.4 | 98.7 | 98.8 | 97.8 | 96.6 | 96.2 | 96.0 | 95.1 | 93.2 | 98.6 | 96.3 | 95.6 | 96.1 | 96.9 |\n| LAME | 98.1 | 97.1 | 98.0 | **87.8** | **90.9** | 87.1 | **78.4** | 87.1 | 80.2 | 81.5 | **39.8** | 96.4 | 82.5 | 70.7 | **64.8** | 82.7 |\n| CoTTA | 98.1 | 98.1 | 98.3 | 98.7 | 98.8 | 97.7 | 96.8 | 96.6 | 96.2 | 95.3 | 93.5 | 98.8 | 96.5 | 95.6 | 96.3 | 97.0 |\n| NOTE | **94.6** | **93.7** | **94.5** | 91.3 | 91.1 | **83.3** | 79.1 | **79.3** | **79.0** | **66.9** | 48.4 | **94.2** | **76.3** | **61.8** | 76.6 | **80.7** |\n\n(2) Classification error (%) for ImageNet-C with uniformly distributed test streams (i.i.d):\n| | Gauss | Shot | Impulse | Defocus | Glass | Motion | Zoom | Snow | Frost | Fog | Bright | Contrast | Elastic | Pixelate | JPEG | _Avg_ |\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:-----:|:---:|:---:|:---:|:---:|:---:|\n| Source | 98.4 | 97.7 | 98.4 | 90.6 | 92.5 | 89.9 | 81.8 | 89.5 | 85.0 | 86.3 | 51.1 | 97.2 | 85.3 | 76.9 | 71.7 | 86.2 |\n| BN Stats | 89.4 | 88.6 | 89.3 | 90.8 | 89.9 | 81.3 | 69.9 | 72.6 | 73.8 | 62.6 | 44.1 | 92.1 | 64.6 | 60.4 | 70.7 | 76.0 |\n| ONDA | 89.2 | 88.2 | 89.0 | 91.0 | 90.0 | 81.5 | 69.6 | 72.6 | 73.8 | 62.6 | 44.0 | 92.1 | 64.3 | 60.1 | 70.1 | 75.9 |\n| PL | 88.5 | 86.7 | 89.6 | 92.2 | 92.1 | 82.2 | 64.4 | 69.8 | 79.4 | **55.9** | 44.0 | 97.6 | **57.8** | 52.6 | **60.4** | 74.2 |\n| TENT | 92.9 | 90.8 | 92.8 | 95.4 | 94.5 | 88.2 | 74.9 | 74.1 | 83.6 | 56.9 | 44.9 | 98.2 | 58.5 | **52.5** | 64.1 | 77.5 |\n| LAME | 98.6 | 97.8 | 98.6 | 90.7 | 92.7 | 89.9 | 81.9 | 89.9 | 85.0 | 86.5 | 51.1 | 97.3 | 85.6 | 77.1 | 71.8 | 86.3 |\n| CoTTA | **85.6** | **84.5** | **85.5** | 87.6 | 86.3 | 74.6 | **64.1** | 67.9 | 69.8 | 56.1 | **42.7** | 89.0 | 60.0 | 54.3 | 64.8 | 71.5 |\n| NOTE | 87.7 | 85.8 | 87.4 | **83.2** | **83.3** | **73.6** | 65.5 | 
**65.0** | **68.5** | 58.0 | 43.6 | **75.9** | 61.2 | 54.1 | 62.9 | **70.4** |\n\n\n\nAs shown, this experiment with ImageNet-C shows consistent outcomes with the experiments in our paper; (1) NOTE outperforms the baselines under temporally-correlated scenarios while most of the baselines fail to surpass the Source method. (2) NOTE also shows comparable performance (in fact, slightly better than) to the state-of-the-art baseline (CoTTA) in the i.i.d. scenario. \n\nWe appreciate your suggestion, and we believe this result would significantly improve the quality of the paper. We will include more comprehensive results, e.g., error bars with multiple runs, in our final manuscript. \n",
" ---\n**Q2. What is the full impact of PBRS on the amount of computation for prediction and adaptation? In particular, how much more time is needed per test input? Given that it is used for batching inputs, it seems that forward and backward would require linearly more time in the size of the reservoir. To put it another way, does using a reservoir of size 64 for a test input mean that inference is 64x times slower or at least requires that many more forward passes?**\n\nPBRS (and NOTE accordingly) does not incur additional forward passes during inference; it requires **only a single forward-pass** for each test sample. In addition, the number of forward/backward passes is fixed (= number of test samples) and **does not linearly increase according to the size of the memory**.\n\nThe additional computational overhead caused by PBRS is two-fold: (1) For every sample, it decides whether to add it to memory via Algorithm 1 without any additional forward/backward passes. (2) For every 64 samples(= memory size), PBRS updates only the IABN’s normalization statistics ($\\bar{\\mu}$ and $\\bar{\\sigma}^2$) via exponential moving average and affine parameters ($\\gamma$ and $\\beta$) via backward passes; thus the number of samples seen during the test time is equal to the number of samples subject to backward passes.\n\nWe emphasize NOTE is computationally efficient compared with state-of-the-art approaches that require multiple forward passes to infer a single sample, such as CoTTA.\n\nWe submitted a revised draft to reflect our answer for your comment to highlight the computational advantages of NOTE compared with the baselines.\n\n---\n**Q3. Is it possible to adopt IABN during testing without altering training? That is, can NOTE operate in a fully test-time manner without having to (re-)train the model with IABN?**\n\nWe investigated whether switching BN to IABN without re-training still leads to performance gain. IABN* refers to replacing BN with IABN during test time. \n\nCIFAR dataset in non-i.i.d settings:\n| Method | CIFAR10-C | CIFAR100-C | _Avg_ |\n|---|:---:|:---:|:---:|\n| Source | 42.3 ± 1.1 | 66.6 ± 0.1 | 54.43 |\n| **IABN*** | 27.1 ± 0.4 | 60.8 ± 0.1 | 43.98 |\n| IABN | 24.6 ± 0.6 | 54.5 ± 0.1 | 39.54 |\n| __IABN*+PBRS__ | 24.9 ± 0.2 | 55.9 ± 0.2 | 40.41 |\n| IABN+PBRS | **21.1 ± 0.6** | **47.0 ± 0.1** | **34.03** |\n\nCIFAR dataset in i.i.d settings:\n| Method | CIFAR10-C | CIFAR100-C | _Avg_ |\n|---|:---:|:---:|:---:|\n| Source | 42.3 ± 1.1 | 66.6 ± 0.1 | 54.43 |\n| **IABN*** | 27.1 ± 0.4 | 60.8 ± 0.2 | 43.98 |\n| IABN | 24.6 ± 0.6 | 54.5 ± 0.1 | 39.53 |\n| __IABN*+PBRS__ | 23.2 ± 0.4 | 55.3 ± 0.1 | 39.26 |\n| IABN+PBRS | **20.1 ± 0.5** | **46.4 ± 0.0** | **33.24** |\n\n\nWe note that IABN* still shows a significant reduction of errors under the CIFAR10-C and CIFAR100-C datasets compared with BN (Source). We interpret this as the normalization correction in IABN is valid to some extent without re-training of the model. We notice that IABN* outperforms the baselines in CIFAR10-C with 27.1% error, while the second best, LAME, showed 36.2% error. In addition, IABN* also shows improvement combined with PBRS (IABN*+PBRS). This result suggests that IABN can still be used without re-training the model.\n\nWe thank you for the suggestion, and we included these new results in the revised supplementary file (Section C.3 and Table 19), which we believe would be an interesting investigation and discussion about the possibility of skipping retraining with IABN.\n\n---\n**Q4. 
For the real-world datasets, what is the source accuracy on the source data? This is important to report for measuring adaptation so that the severity of the shift and how much it is mitigated can be gauged. As presented, it is hard to know if HARTH and ExtraSensory are hard problems, and if the source model is not already doing quite well.**\n\nThank you for your suggestion. We calculated the source-domain error rates of KITTI, HARTH, and ExtraSensory compared with the target domain(s).\n\n| KITTI | _Src domain_ | Rain | _Avg_ |\n|---|:---:|:---:|:---:|\n| Source | **7.4 ± 1.0** | 12.3 ± 2.3 | 9.9 |\n\n| HARTH | _Src domain_ | S008 | S018 | S019 | S021 | S022 | S028 | S029 | _Avg_ |\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| Source | **11.7 ± 0.7** | 86.2 ± 1.3 | 44.7 ± 2.1 | 50.4 ± 9.5 | 74.8 ± 3.8 | 72.0 ± 2.6 | 53.0 ± 24.0 | 57.0 ± 16.7 | 56.2 |\n\n| ExtraSensory | _Src domain_ | 4FC | 598 | 619 | 797 | A5C | C48 | D7D | _Avg_ |\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| Source | **8.3 ± 0.7** | 34.6 ± 2.5 | 40.1 ± 0.7 | 63.8 ± 5.7 | 45.3 ± 2.4 | 64.6 ± 3.7 | 39.6 ± 6.8 | 63.0 ± 3.9 | 44.9 |\n\nAs shown, there exists a clear gap between the source and the target domains, which demonstrates the severity of the shift. We included this result in explaining the details of the datasets in the revised supplementary material (Section C.2.2, Table 16, Table 17, and Table 18).\n\n\n",
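To complement the IABN* discussion in Q3 (replacing BN with IABN at test time without retraining), the snippet below shows how such a swap could be done on a pretrained PyTorch model while reusing each BN layer's running statistics and affine parameters. It assumes an IABN wrapper along the lines of the `IABN2d` sketch given earlier and is not the authors' code.

```python
import torch.nn as nn


def swap_bn_with_iabn(module: nn.Module, iabn_factory):
    """Recursively replace nn.BatchNorm2d layers using `iabn_factory(bn) -> nn.Module`.

    Each replacement receives the trained BN layer, so its running statistics and
    affine parameters can be reused directly; no retraining is required, and only
    the normalization rule applied at test time changes (the IABN* variant above).
    """
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, iabn_factory(child))
        else:
            swap_bn_with_iabn(child, iabn_factory)
    return module


# Hypothetical usage with an IABN wrapper such as the IABN2d sketch shown earlier:
# model = swap_bn_with_iabn(torchvision.models.resnet18(pretrained=True),
#                           lambda bn: IABN2d(bn, alpha=4.0))
```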
" ---\n**Q5. How are predictions made on the first samples before the reservoir is filled? Is adaptation deferred until the reservoir is full?**\n\nNOTE **predicts each sample regardless of the memory occupancy**. As discussed in Q2, NOTE does not need to wait for a batch of samples for inference. The samples in memory are utilized only to adapt IABN parameters with balanced samples. We clarified this in the revised draft (Section 3.3).\n\n---\n**Q6. Are the KITTI, HARTH, and ExtraSensory datasets \"real distributions with domain shift\" (as in Sec. 4.2)? KITTI-Rain is real data, with a synthetic shift. HARTH and ExtraSensory are both real data, but the shift is not evident.**\n\nWe agree that it might be misleading. In the revised draft and supplementary file (Section 4.2 and C.2.2), we replaced “real distributions with domain shift” with “real-distributions with domain shift”. We intended to emphasize that the distributions are real, while domain shift is synthetic for KITTI-Rain and real for HARTH and ExtraSensory. Regarding the evidence of the shift, please refer to Q4.\n\n---\n**Q7. Please clarify the hyperparameters for IABN in Equation 5 and the definition of ψ that follows it. k is often used for integer values, but in this case it seems to be a real-valued weight. Consider other notation, such as alpha, and introducing this weighting in its own sentence following Equation 5.**\n\nIn our original submission, we explained the hyperparameter k and the definition of $\\psi$ right after Equation (5), in line 141~: “where ψ (x;λ) =...... k >= 0 is a hyperparameter….”. \n\nWe revised the draft to further clarify this (Section 3.1). In addition, we replaced $k$ with $\\alpha$ following your suggestion.\n\n\n---\n**Q8. Figure 3 tries to cover a lot, and is less accessible as a result. Part (3) on PBRS does not communicate a lot, as it just shows updated statistics, so consider dropping it in favor of focusing more on the IABN update. Consider more labeling of part (1) to express how IABN corrects BN or not.**\n\nThank you for your comment. We described the procedure of IABN better in Figure 3 and in the revised draft.\n\n---\n**Q9. In Figure 3 (2), is the time arrow reversed? I would expect the latest sample xt to be new while the discarded sample xi should be old.**\n\nThank you for pointing it out. “Old” and “New” labels in Figure 3 (2) only apply to the samples within the memory. We agree that it might be confusing. We fixed the issue in Figure 3 in the revised draft. \n\n---\n**Q10. To report failures in the temporal setting, consider highlighting results that are worse than the source model with special formatting. For example, these could be underlined, or typset in red.**\n\nWe appreciate your suggestion. We updated the values in tables with red fonts in the revised draft and supplementary file.\n",
" We sincerely appreciate your time and effort in providing us with positive comments. We respond to your question in what follows. We also ask you to kindly refer to the *common response* we have posted together.\n\n---\n**Comment1. In the experiments, the authors should use more tasks such as some NLP data and more popular base models like Transformer-based ones for both CV and NLP. Otherwise, it's hard to measure how the proposed methods really work in modern AI systems.**\n\nWe would like to highlight that all the SOTA baselines (BN Stats, ONDA, PL, TENT, CoTTA) except for LAME, are not applicable to Transformer-based models. Similarly, NOTE is not applicable to models without BN layers. In our original submission, we clarified and discussed this limitation in Section 6:\n> Similar to existing TTA algorithms [28, 32, 40, 43], **we assume that the backbone networks are equipped with BN layers**, and particularly, we replaced the BN layers with IABN layers. While BN is a widely-used component in deep learning, there exist architectures that do not embed BN layers such as LSTMs [13] and Transformers [38]. Whether naively inserting BN or IABN would be sufficient for applying these TTA methods is still in question. A recent study evidenced that BN is advantageous in Vision Transformers [44], showing potential room to apply our idea to architectures without BN layers. However, more in-depth studies are necessary to identify the actual applicability of BN (or IABN) to those architectures.\n\nNevertheless, the other technical component of our method, PBRS, can be implemented on transformers. To investigate the impact of PBRS on Transformer-based models for NLP tasks, we adopted the **BERT** [r1] tiny model. Specifically, PBRS was utilized to update the trainable parameters in the Layer Normalization layers of the model. We conducted test-time adaptation on common text sentiment analysis datasets, following previous work [r2]; we used **SST-2** [r3] (movie reviews; source domain) to fine-tune the BERT model and evaluated its test-time adaptation capabilities on **FineFood** [r4] (food reviews; target domain). This setup is valuable for understanding how TTA algorithms perform under the domain gap in text sentiment caused by the difference in topics. During test-time adaptation, we used the Dirichlet distribution for simulating the non-i.i.d. streams, as previously described in our experiment section. The result is presented in the table below:\n\nTable. Classification error (%) on FineFood with both temporally correlated and uniform test streams. The lower, the better. N/A refers to “not applicable.”\n| | non-i.i.d. | i.i.d. |\n|---|---:|---:|\n| Source | 37.4 ± 0.0 | 37.4 ± 0.0 |\n| BN Stats | N/A | N/A |\n| ONDA | N/A | N/A |\n| PL | N/A | N/A |\n| TENT | N/A | N/A |\n| LAME | 35.7 ± 0.3 | 41.6 ± 0.3 |\n| CoTTA | N/A | N/A |\n| **NOTE (PBRS-only)** | **34.8 ± 2.1** | **34.6 ± 2.5** |\n\nWe found that, to some extent, PBRS exhibits its effectiveness. Nevertheless, NOTE cannot fully utilize the synergy between IABN and PBRS regarding balanced statistics updates in this case. While LAME is applicable to models without BN, **LAME’s critical limitation is the performance drop in i.i.d. scenarios**, as shown not only for this particular experiment but also in our main evaluation. 
The primary reason is, as stated by the authors of LAME, that it “discourages deviations from the predictions of the pre-trained model,'' and thus it “does not noticeably help in i.i.d and class-balanced scenarios.”\n\nIn summary, while NOTE shows its effectiveness in **both** non-i.i.d and i.i.d. scenarios, a remaining challenge is to design an algorithm that generalizes to any architecture. We believe the findings and contributions of our work could give valuable insights to future endeavors on this end. We submitted a revised draft to incorporate this discussion.\n\n\n[r1] Devlin, Jacob, et al. \"BERT: Pre-training of deep bidirectional transformers for language understanding.\" arXiv preprint arXiv:1810.04805 2018.\n\n[r2] Moon, Seung Jun, et al. \"Masker: Masked keyword regularization for reliable text classification.\" AAAI 2021.\n\n[r3] Socher, Richard, et al. \"Recursive deep models for semantic compositionality over a sentiment treebank.\" EMNLP 2013.\n\n[r4] McAuley, et al. \"From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews.\" WWW 2013.\n",
" The authors present a method named NOTE to address the online adaptation for non-iid data streams without additional annotation. It has two key components: instance-aware batch norm (IABN) and prediction-balanced reservoir sampling (PBRS). The former computes the difference between learned knowledge and the current observation (i.e., the new instances from the online data stream). The latter aims to avoid overfitting towards the non-iid streams by mimicking the iid samples with a simulated memory. They evaluate the method with sota TTA baselines on a few datasets (mainly for computer vision) and show that NOTE is effective. ***Strengths***\n\n\n- The paper is comprehensive and has a nice presentation. It introduces the background knowledge very well and the proposed components have clear motivation. \n\n- The IABN and PBRS are both novel and show effectiveness over other baseline methods. \n\n- This paper studies a very important problem of test-time adaptation without supervision. It can be used for many real-world applications. \n\n***Weakness***\n- In the experiments, the authors should use more tasks such as some NLP data and more popular base models like Transformer-based ones for both CV and NLP. Otherwise, it's hard to measure how the proposed methods really work in modern AI systems. N/A Yes",
" Test-time adaptation updates the model on the test data to improve generalization to shifted data.\nWhile this sort of adaptation can help, most existing methods assume the test data arrives i.i.d., and in particular without temporal dependence.\nIn temporal settings, such as robotics or autonomous driving, this work first demonstrates the weakness of existing methods, and then proposes a fix (NOn-i.i.d TEst-time adaptation, or NOTE) that adjusts normalization statistics and maintains a reservoir/queue of test data to stabilize test-time optimization.\n\nThis fix is an extension of TENT, a test-time adaptation method that updates batch normalization layers in two steps: first, it re-estimates the normalization statistics from test batches, and second, it updates affine scale and shift parameters to minimize the entropy of model predictions.\nThe normalization part of the fix generalizes instance norm and batch norm (IABN), reducing to each as a special case of the hyperparameters, and in essence correct outliers w.r.t. the batch statistics by using the instance statistics instead (Sec. 3.1).\nThe reservoir part of the fix adopts reservoir sampling to maintain a queue that is expected to be class-balanced and time-balanced in order to simulate access to i.i.d. data in the non-i.i.d. setting.\nThe balancing is accomplished by making use of the model predictions as pseudo-labels, in place of the true labels used for continual learning, for prediction based reservoir sampling (PBRS).\nWith the reservoir samples, current test inputs can be batched with past test inputs, which stabilizes both the normalization statistics and entropy gradients for updating the affine parameters.\n\nExperiments on standard benchmarks for dataset shift (CIFAR-10/100-C), real temporal data with synthetic shift (KITTI), and lesser-known temporal datasets (HARTH, ExtraSensory) demonstrate that existing methods degrade in the temporal setting.\nMatched comparison on standard and temporal/class-ordered CIFAR-10/100-C confirm that NOTE achieves comparable accuracy in both settings, rivaling existing methods in the i.i.d. case and signifcantly improving (~50% relative error) in the temporal case.\nOn the real temporal data (KITTI, HARTH, ExtraSensory) NOTE still does as well as better, although the KITTI result is within the error bars, while HARTH and ExtraSensory are not well-established as benchmarks for shift.\nThe main components of the method, IABN and PBRS, are ablated and the sensitivity of methods to temporal dependence w.r.t. the degree of non-uniformity and batch size are analyzed.\n\nNOTE is the first work to identify the failure of test-time adaptation on temporal data and achieves state-of-the-art accuracies compared to existing methods in this setting. Strengths\n\n- Well-motivated: Temporal data is ubiquitous, and many cases that may need adaptation are indeed temporal, such as autonomous driving in anomalous weather or robotics applications in poorly-controlled conditions.\n The difficulty current methods have with temporal data is neatly summarized by Fig. 2, which shows the stark contrast between the i.i.d. and non-i.i.d. settings on a simple dataset (CIFAR-10-C).\n- Clear illustrations: Fig. 1 shows two varieties of temporal data, with inputs and labels, where both clearly show the type of temporal correlation higlighted by this work.\n- Sound experimental design on standard datasets: The use of CIFAR-10/100-C is standard and serves as a sanity check of the method in the established setting. 
The application of the Dirichlet distribution for sampling more dependent and less dependent orderings of the data according to its concentration hyperparameter is appropriate and effective for sweeping across degrees of dependence.\n- Strong enough results: NOTE rivals existing methods in the existing setting, and does far better in the temporal setting it focuses on.\n- Baselines: The experimental comparisons include the basic method, TENT, which is extended by NOTE, alongside the recent and strongest methods such as LAME and CoTTA which were only recently published at CVPR'22.\n- Consistency: The same hyperparameters, such as the threshold for IABN, are used across experiments\n\nWeaknesses\n\n- NOTE has to alter training by replacing BN with IABN, and therefore cannot be applied to off-the-shelf models without re-training.\n The test-time adaptation setting is in part motivated by not having access to the source/training data, so this makes NOTE less of a good fit for this particular assumption.\n- There is no large-scale evaluation in terms of dataset or model. For data, one would expect results on ImageNet-C, which serves as a gold standard benchmark of robustness to natural shifts like corruptions. For models, ResNet-50 is a common choice among test-time adaptation methods, including those compared to in this work.\n- How NOTE updates is not fully specified. While Sec. 3.2 explains that inputs are sampled from the reservoir and batched with test inputs, it does not detail the proportion of test inputs to reservoir inputs and the number of gradient steps among other such considerations.\n- The choice of datasets is not entirely justified. HARTH and ExtraSensory are indeed temporal, but at the same time they are unfamiliar for the purpose of evaluating adaptation. Are these datasets subject to shift across the given sources and targets? It would be reassuring to measure source accuracy on source and target to evidence drops and thus the presence of shift.\n The architectures for these datasets are likewise not standard, which would be fine, except the text does not explain how to they incorporate IANB (though this presumably follows the convolution layers). Questions\n\n- How does NOTE perform on ImageNet-C in the i.i.d. and non-i.i.d. settings with ResNet-18 (and ideally ResNet-50)? For the non-i.i.d. setting, consider the same proposed dirichlet ordering, or simply ordering the data by classes.\n- What is the full impact of PBRS on the amount of computation for prediction and adaptation? In particular, how much more time is needed per test input? Given that it is used for batching inputs, it seems that forward and backward would require linearly more time in the size of the reservoir. To put it another way, does using a reservoir of size 64 for a test input mean that inference is 64x times slower or at least requires that many more forward passes?\n- Is it possible to adopt IABN during testing without altering training? That is, can NOTE operate in a fully test-time manner without having to (re-)train the model with IABN?\n- For the real-world datasets, what is the source accuracy on the source data? This is important to report for measuring adaptation so that the severity of the shift and how much it is mitigated can be gauged. As presented, it is hard to know if HARTH and ExtraSensory are hard problems, and if the source model is not already doing quite well.\n- (Minor) How are predictions made on the first samples before the reservoir is filled? 
Is adaptation deferred until the reservoir is full?\n\nOther Feedback\n\n- Are the KITTI, HARTH, and ExtraSensory datasets \"real distributions with domain shift\" (as in Sec. 4.2)? KITTI-Rain is real data, with a synthetic shift. HARTH and ExtraSensory are both real data, but the shift is not evident.\n- Please clarify the hyperparameters for IABN in Equation 5 and the definition of $\\psi$ that follows it. $k$ is often used for integer values, but in this case it seems to be a real-valued weight. Consider other notation, such as $alpha$, and introducing this weighting in its own sentence following Equation 5.\n- Figure 3 tries to cover a lot, and is less accessible as a result. Part (3) on PBRS does not communicate a lot, as it just shows updated statistics, so consider dropping it in favor of focusing more on the IABN update. Consider more labeling of part (1) to express how IABN corrects BN or not.\n- In Figure 3 (2), is the time arrow reversed? I would expect the latest sample $x_t$ to be new while the discarded sample $x_i$ should be old.\n- To report failures in the temporal setting, consider highlighting results that are worse than the source model with special formatting. For example, these could be underlined, or typset in red. The limitations covered are thoughtful, in particular when it comes to the potential to amplify bias in testing data. The reliance on batch normalization layers, at least as empirically examined, is honestly discussed. While this discussion is adequate, it would be further improved by underlining the need to alter training, in substituting BN with IABN, as other methods are agnostic to training (like TENT and BN).",
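The prediction-balanced reservoir (PBRS) described in the review above can be made concrete with a short sketch. The class below is only our reading of the idea (balance classes using the model's pseudo-labels, and keep each class's slots approximately uniform over time with a reservoir-style acceptance test); it is not the paper's exact PBRS pseudo-code, and all names are illustrative.

```python
import random
from collections import defaultdict

class PredictionBalancedReservoir:
    """Toy prediction-balanced, time-uniform memory: pseudo-labels (model
    predictions) keep classes balanced, and a reservoir-style test keeps each
    class's slots roughly uniform over the stream. Our own reading of the
    idea, not the paper's exact PBRS pseudo-code."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []                 # list of (sample, pseudo_label)
        self.seen = defaultdict(int)     # per-class arrival counter

    def add(self, x, pseudo_label):
        self.seen[pseudo_label] += 1
        if len(self.memory) < self.capacity:
            self.memory.append((x, pseudo_label))
            return
        counts = defaultdict(int)
        for _, y in self.memory:
            counts[y] += 1
        majority = max(counts.values())
        if counts[pseudo_label] < majority:
            # Under-represented class: evict a random majority-class sample.
            victims = [i for i, (_, y) in enumerate(self.memory)
                       if counts[y] == majority]
            self.memory[random.choice(victims)] = (x, pseudo_label)
        else:
            # Majority class: reservoir-style acceptance so the kept samples
            # of this class stay approximately uniform over time.
            kept = counts[pseudo_label]
            if random.random() < kept / self.seen[pseudo_label]:
                same = [i for i, (_, y) in enumerate(self.memory)
                        if y == pseudo_label]
                self.memory[random.choice(same)] = (x, pseudo_label)

buf = PredictionBalancedReservoir(capacity=4)
for t, y_hat in enumerate([0, 0, 0, 1, 0, 2, 0, 1]):
    buf.add(f"x{t}", y_hat)
print([y for _, y in buf.memory])   # classes stay roughly balanced
```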
" Temporal correlated data violates the i.i.d. assumption and the existing test-time adaptation methods are prone to overfitting under high correlated test data. This work proposed two components, IABN and PBRS, to tackle the temporal correlation. IABN only updates BN parameters when test data distribution is sufficiently different and PBRS resamples data samples for calculating test data batch statistics. Experiments are carried out on simulated temporal correlated data and real datasets. Strength:\n\n1. Temporal correlation is a practical concern in test-time training. Addressing the challenges in TTA for temporal correlated data is important.\n \n2. Experiments are extensive, covering both visual and audio modalities.\n\nWeakness:\n\n3. The improvement on KITTI is very marginal (as shown in Tab. 2). KITTI is the real temporally correlated dataset that motivates this work but the improvement is not significant to support the claims.\n\n4. For temporally correlated test-time adaptation task, CIFAR10-C/100-C can hardly simulate the temporal correlation. How does the temporally correlated streams look like? It is important to give illustrations or analysis of why this synthesized data stream presents temporal correlation.\n\n5. Adjusting Batchnorm has limitation to certain backbone networks. For example, ViT (transformer) does not have batchnorm layers, how does this approach apply to transformer backbones where BN is missing?\n Evaluation and discussion on how to generalize to transformer based backbones would further improve the quality of this work.\n\nFor streaming test, it would be interesting to report the real-time accuracy which illustrates the real adaptation power in test-on-stream manner. Yes.",
" Test-time training is an emerging and promising approach for building robust machine learning models to distribute shifts. This paper claims that most existing TTA methods assume that the test samples come from i.i.d distribution, but it does not hold for many practical applications of TTA, such as self-driving, human activity recognition, etc. This paper then proposes two new techniques to adapt the model under non-i.i.d setup, namely Instance-Aware Batch Normalization and Prediction-Balanced Reservoir Sampling (PBRS). Experimental results on synthesized non-i.i.d setup and realistic non-i.i.d setup show that the proposed method significantly outperformed the existing methods. \n ---\n**Strengths**\n\n(1) Test-time adaptation is a timely and important topic. This paper pointed out important issues of existing TTA methods. \n\n(2) This paper validates the effectiveness of the proposed method on both synthetic and realistic benchmark tasks of non-i.i.d adaptation setup. \n\n---\n**Weaknesses**\n\nOverall, I think this paper provides a good contribution for the test-time adaptation research, with extensive experiments. However, I found several important details lacking in current manuscripts, including the detail of the proposed method (especially IABN part), explanation about why the proposed method should work better than prior works, and empirical validations. See questions for the detailed comments. \n\nMinor comments/questions:\n- Equation (4), the left hand side should be u_c/L?\n- Lines 282-284, can you explain why LAME does not works well on i.i.d setup, or even worse than non-i.i.d setup?\n (1) Why do we need equation (4) and equation (5)? What happens if we always u_{b, c} to compute outputs? \n\n(2) What is the final output of IABN layer?\n\n(3) Regarding the exponential moving average presented in 3.2, do you stop gradient for the past mean and sigma? \n\n(4) Why do we need lines 9-17 for PBRS? In other words, why don’t you randomly discard one instance in memory even if y_t \\in L? \n\n(5) In table 1, why does the value for Source differ between i.i.d setup and non-i.i.d setup? \n\n(6) If I correctly understood the paper, PBRS can be incorporated with any prior studies, e.g., TENT, PL. Why did you choose to combine it with IABN? In other words, is IABN itself perform better than existing methods? \n This paper adequately addressed the limitation and potential negative social impact. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"v4PIY92_5eA",
"1SGzegtgiOu",
"sz1muyMVHE-",
"qWPYZ3e6xxy",
"eD_gavTXdQ4",
"OygYPlYaWop",
"nips_2022_E9HNxrCFZPV",
"nips_2022_E9HNxrCFZPV",
"cvGIFr7ETV-",
"cvGIFr7ETV-",
"xl8by_jGVvt",
"xl8by_jGVvt",
"-dw4WhZ8FFi",
"-dw4WhZ8FFi",
"-dw4WhZ8FFi",
"-dw4WhZ8FFi",
"pLhU1eCRXO",
"nips_2022_E9HNxrCFZPV",
"nips_2022_E9HNxrCFZPV",
"nips_2022_E9HNxrCFZPV",
"nips_2022_E9HNxrCFZPV"
] |
nips_2022_tZUOiVGO6jN | A Deep Learning Dataloader with Shared Data Preparation | Executing a family of Deep Neural Networks (DNNs) training jobs on the same or similar datasets in parallel is typical in current deep learning scenarios. It is time-consuming and resource-intensive because each job repetitively prepares (i.e., loads and preprocesses) the data independently, causing redundant consumption of I/O and computations. Although the page cache or a centralized cache component can alleviate the redundancies by reusing the data prep work, each job's data sampled uniformly at random presents a low sampling locality in the shared dataset that causes the heavy cache thrashing. Prior work tries to solve the problem by enforcing all training jobs iterating over the dataset in the same order and requesting each data in lockstep, leading to strong constraints: all jobs must have the same dataset and run simultaneously. In this paper, we propose a dependent sampling algorithm (DSA) and domain-specific cache policy to relax the constraints. Besides, a novel tree data structure is designed to efficiently implement DSA. Based on the proposed technologies, we implemented a prototype system, named Joader, which can share data prep work as long as the datasets share partially. We evaluate the proposed Joader in practical scenarios, showing a greater versatility and superiority over training speed improvement (up to 500% in ResNet18). | Accept | The paper proposes a new data loader called Joader for parallel DNN training on overlapped datasets that allows tasks to share memory and computational resources for data preprocessing. Joader implements a new sampling mechanism and cache policy to reduce cache misses due to data access from multiple tasks and a new data structure to facilitate the implementation. Joader has been integrated with PyTorch and shown to be very effective in practice. All reviewers agree that the paper makes a valuable contribution to the NeurIPS community and I agree with Reviewer 6QhD that the contribution stands even if the system is not covering distributed workloads. I thus recommend acceptance. However, for the camera ready version I think it is important that the authors incorporate additional discussion about potential limitations of their approach so that the tool can be used most effectively by researchers. For example, potential overheads in the implementation of the new data structure should be discussed even if they are small in the provided experiments and also the fact that sampling across jobs is now correlated and no longer independent is important to emphasize together with potential implications for the resulting workloads (e.g. can the results of the parallel runs still be used to achieve variance reduction by ensembles?). Beyond that I think the additional experiments to break down the performance gains into individual components is valuable and clarification made in the conversation with the reviewers should be incorporated in the paper. | train | [
"IOE6P2K4cuC",
"OGJagwA6x23",
"i7p79qnkypz",
"YYcP_DZSHy",
"A718UuM2IBy",
"ZWz6paCV0M",
"wRxUIjqlKPI",
"UYpm88oAhc"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I totally understand it is hard to prove the optimality in the N-job case in such a practical scenario. The evaluation in the paper and your comments convinced me that the contributions of the DSA's current version are enough for the community. Measuring the degree of breaking the strict correlation among jobs may also be a good direction to study in the future.\n\nI also read the replies about the distributed scenarios to the other reviewers. This paper reveals an important problem, and proposes an algorithm with the specific data structure and Cache policy to deal with the problem. The theoretical proofs for the algorithm, and the evaluation for the implemented Joader show the superiority. Regarding the algorithm, I think this paper almost resolves the problem in a classic scenario (i.e., on a server). The implemented Joader is more like a prototype showing the algorithm's performance. I agree that the distributed scenario is important and deserves further study, but implementing the algorithm for a distributed system is another story. It may not be a weakness of this paper.\n\nThus, I would like to champion this paper.",
" Dear reviewers,\n\nWe thank all the reviewers' detailed and constructive comments and concerns. Since only a few days are left in the author-reviewer discussion, we hope our responses address the reviewers' concerns adequately. In case we miss anything in our responses, please feel free to leave any further comments, concerns, or suggestions to us. We are very happy to discuss and answer further questions here.\n\nBest",
" **Weakness_1&Question:**\n\nEssentially, we choose the ResNet family although our method is independent of the model structure, and our proposed method is only related to the model training speed (the training speed varies depending on the number of ResNet layers). To further demonstrate the versatility of our method, we have conducted additional evaluations on the other sets of models. For each model, we run six workloads in parallel and compare Joader with the baselines. The results show that our Joader consistently outperforms the baseline when multiple jobs are running in parallel (see below).\n\n|model|time in baseline(min)|time in Joader(min)|\n|:-|-:|-:|\n|**AlexNet**|72.68671|40.01698|\n|**SqueezeNet**|71.99047|40.59335|\n|**ShuffleNet**|82.02182|47.65184|\n|**MobileNet_v2**|75.24602|42.39433|\n|**MobileNet_v3_small**|63.79987|38.60851|\n\n**Weakness_2:** \n\nJoader for distributed training is another important use case that requires an independent future study.\nIn this paper, our goal is to propose the sampling algorithm with a specific data structure, which is independent of the system implementation. In this regard, evaluation on a single machine is sufficient to verify the performance of our algorithm. The implementation and evaluation of distributed systems add extra complications. It requires additional design on data assignment and communication in a distributed manner, which are not the focus of the sampling algorithm. So we did not implement and evaluate Joader in a distributed system when we submitted this paper. However, we could provide some discussions about the guidance of implementation when training models in terms of data parallelism. In a distributed environment, each node runs a Joader, and we select one Joader as the leader which is responsible for sampling globally in DSA. The leader is in charge of dispatching the data indices to each follower Joader, while these followers can maintain the cache and load the data from their local disks.",
" **Weakness_1:** \n\nFor the description of the N-job case, we will revise it to describe the algorithm more clearly and concisely.\n\n**Weakness_2 & Question_2:** \n\nFor the optimality in the N-job case, we can indeed get the optimum at each step (a loop in pseudo-code, i.e., line 5 to line 14 or a level in Figure3).\n\nProof:\n\nAssume there are n jobs $\\{J_1,...,J_n\\}$ training upon n datasets $\\{D_1,…,D_n\\}$.\n\nIn each step, we have proved that for $k$-th job in each loop, the probability of choosing intersection $I$ is \n\n\n$$\np =\\frac{I}{D_k}\n$$\n\nin Lemma1 of Appendix C.3. The probability is obviously optimal because it is equal to the probability that $J_k$ randomly chooses intersection $I$ directly.\n\nHowever, the global optimal probability is associated with the number of jobs, the sizes of the intersections, the sizes of the datasets, and the shape of the sampling tree (left-deep tree or bushy tree). Considering the algorithm is a greedy strategy, it is hard to prove the global optimality for now. Nevertheless, we believe the proposed method with the optimality for the 2-job case and the superior efficacy in practice could contribute to the community and is worthy of further exploration. \n\n**Question_1:** For the tradeoff between strict correlation and randomness, the goal of this paper is to efficiently load the data while retaining randomness for each job. So the proposed method will get a strict correlation for the jobs with the same datasets. The balance between strict correlation and randomness among jobs could be a good future direction. On our conjecture, if the randomness could be loosened to some degree (which could be measured), the sampling performance could be further improved with a guarantee in terms of the metric of the degree.",
" **Weakness_1:** \n\nThe reviewer asked about the extra overhead of our method, especially when the number of training jobs increases. In Joader, the time complexity of dataset operations, i.e., sampling, deletion, and insertion, are O(1), O(1), and O(|D|), respectively, for each dataset. Since the dependent sampling tree manages the **index** of each input rather than the input itself, executing the operations is extremely fast. For implementation, we use a bitmap to represent the dataset in our dependent sampling tree, which is efficient for these operations. In practice, the operations only count up to a minor fraction of the time consumption of the total data preparation process. \n\nTo evaluate the time cost of dataset operations, we have conducted an additional experiment by randomly inserting 128 datasets into our system. The size of each dataset is between 1,000,000 and 2,000,000 elements (note that ImageNet contains 1,400,000 images). The average cost of inserting each dataset is 0.57 seconds. And the time cost of the operations remains nearly constant as the number of datasets increases. \n\nDeletion and sampling are also efficient in our algorithm. Sampling takes an average of 0.000054 seconds for each element, and deletion takes an average of 0.00000105 seconds. We will add the corresponding description and experiment in our rebuttal revision.\n\n**Weakness_2:** \n\nFor our motivation, we claim that the scenarios of training multiple tasks on overlapped datasets on the same machine are actually common. We briefly clarify our motivation as follows.\n\nTraining multiple models in parallel are typical and practical in HyperParameter Search (HPS) [1,3] and Neural Architecture Search (NAS) [2,4]. Models with different architectures or HyperParameters may lead to different training speeds. No matter the traversal orders on the same dataset, different training speeds make the remaining subsets in an epoch overlap at every moment. Meanwhile, many existing HPS and NAS work runs on a single server with multi-GPUs [5]. The above scenarios motivate us to propose the algorithm for efficient data preparation and accelerating research and AI application development.\n\n**Question_1:** \n\nSorry for the unclear description of the synchronous case. We clarify the term “synchronous case” in our evaluation. That is, all jobs of Joader and CoorDL[1] train the same model on the same dataset, load data in the same order and start at the same time. We try to make fair comparison by forcing the jobs training on the GPUs of the same version to get the theoretically same training speeds. \n\nCoorDL used a straightforward yet effective strategy. The reason for its performance can be found in the CoorDL paper (Section 6.3, [1]), quoted below:\n\n“Each job in the HP search operates on the same data; hence, instead of accessing data independently for each job, they can be coordinated to fetch and prep the dataset exactly once per epoch. Each epoch is completed in a synchronized fashion by all HP jobs; as a result, preprocessed mini-batches created by one job can be reused by all concurrent jobs.” \n\nWe implemented the above method (CoorDL) and compared it with our DSA. However, the DSA is naturally asynchronous. Thus training speeds of the jobs would be inevitably affected by the minor performance distinction among GPUs and the concurrency control of the operating system. Notice that CoorDL forces the jobs’ executions at the same pace. 
If we add such synchronous control to DSA, it will perform exactly the same as CoorDL.\n\n**RefCnt policy**: \n\nRefCnt is effective and important. In our revision, the results in Figure 12 show that RefCnt significantly improves performance. Compared to the classical cache policy, the RefCnt policy can reduce up to 50% of cache misses when the cache can hold half of the dataset.\n\n[1] Mohan, Jayashree, Amar Phanishayee, Ashish Raniwala, and Vijay Chidambaram. \"Analyzing and Mitigating Data Stalls in DNN Training.” *Proc. VLDB Endow.* 14, no. 5 (2021): 771–84. \n\n[2]Elsken, Thomas, Jan Hendrik Metzen, and Frank Hutter. \"Neural architecture search: A survey.\" *The Journal of Machine Learning Research* 20, no. 1 (2019): 1997-2017.\n\n[3]Feurer, Matthias, and Frank Hutter. \"Hyperparameter optimization.\" In *Automated machine learning*, pp. 3-33. Springer, Cham, 2019.\n\n[4]Liu, Chenxi, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. \"Progressive neural architecture search.\" In *Proceedings of the European conference on computer vision (ECCV)*, pp. 19-34. 2018.\n\n[5]Ben-Nun, Tal, and Torsten Hoefler. \"Demystifying parallel and distributed deep learning: An in-depth concurrency analysis.\" *ACM Computing Surveys (CSUR)* 52, no. 4 (2019): 1-43.",
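The response above does not spell out the mechanics of the RefCnt policy, so the snippet below is only a guess at the general idea of a reference-counting cache: keep a prepared sample pinned while some job that sampled it has not consumed it yet, and evict unpinned entries first. The class and method names (`announce`, `put`, `get`) are our own and not Joader's actual API.

```python
class RefCountCache:
    """Toy reference-counting cache: entries still awaited by some job are
    pinned; unpinned entries are evicted first (FIFO here). This is only our
    reading of a 'RefCnt'-style policy, not Joader's implementation."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}      # index -> prepared sample
        self.refs = {}      # index -> number of jobs still waiting to read it

    def announce(self, index, n_jobs):
        # The sampler tells the cache how many jobs will request this index.
        self.refs[index] = self.refs.get(index, 0) + n_jobs

    def put(self, index, sample):
        if index not in self.data and len(self.data) >= self.capacity:
            victims = [k for k in self.data if self.refs.get(k, 0) == 0]
            if victims:
                del self.data[victims[0]]
            else:                      # nothing unpinned: drop the oldest
                del self.data[next(iter(self.data))]
        self.data[index] = sample

    def get(self, index):
        sample = self.data.get(index)
        if sample is not None and self.refs.get(index, 0) > 0:
            self.refs[index] -= 1
            if self.refs[index] == 0:
                del self.refs[index]
        return sample                  # None signals a cache miss

cache = RefCountCache(capacity=2)
cache.announce(7, n_jobs=2)
cache.put(7, "prepared sample 7")
print(cache.get(7), cache.refs)        # prepared sample 7 {7: 1}
```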
" To efficiently perform multi-task training jobs on overlapped datasets, the authors propose a new data loading method, focusing on data preparation. They design a dependent sampling algorithm(DSA) and domain-specific cache policy to maximize the locality. Moreover, a novel tree data structure is constructed to efficiently implement DSA. The experiments show a greater versatility and superiority of training speed improvement without affecting accuracy.\n Strength:\n* Propose a new data loading method for training multiple parallel jobs on overlapped datasets \n* Propose a dependent sampling algorithm (DSA) to maximize the sampling locality while ensuring correlated randomness \n* Design a tree-based data structure for efficient cache policy implementation \n* The experiment results are convincing. Joader reduces end2end multitask training time while keeping CPU utilization low.\n\nWeakness:\n* To maximize the sampling locality, the authors need extra operations, including dividing the datasets into subsets and calculating the probabilities and conditional probabilities. With the number of training jobs increasing, the overhead could be a problem and they didn't mention it. \n* The motivation is weak. It's a rare case that researchers will train multiple tasks on overlapped datasets on the same machine. What technologies do they use to make CoorDL achieve better performance than Joader in synchronous cases? See above. \n\nOverall, the being solved problem is a significant problem. The approach used in this work looks new. However, the experimentation should be improved. RefCnt Cache policy looks trivial.",
" The authors revealed an interesting and practical problem in training parallel DNNs. The redundant consumption of I/O and data preparation, which seems not as crucial as computation in GPU, affects the DNN training speed, especially when multiple DNNs are training parallelly in a GPU server. To solve this problem, the authors proposed a new data loading method for efficiently training parallel DNNs. The method includes a data sampling algorithm to increase the sampling locality with guaranteed randomness. For practical usage, a data structure-dependent sampling tree and a specific cache. Altogether, a prototype Joader (integrated with PyTorch) is implemented to show the extremely fast performance in training DNNs parallelly. Strengths:\n1.\tThe problem is practical and essential to society. The novelty of the method is clear enough. The method keeps the characteristic of random sampling for each DNN training job, and reduces the redundant consumption of I/O and data preparation.\n2.\tThe theoretical proof guarantees the correctness of the algorithm, and the algorithm is proved to reach global optimal in the two-job case.\n3.\tThe authors implemented a prototype Joader, which is integrated with PyTorch. The details of the system design and implementation are provided in Appendix.\n4.\tThe evaluation results clearly show the superiority of training speed improvement from implemented Joader.\n\n\nWeaknesses\n1.\tThe description of the N-job case is a little bit complicated. After carefully reading the operations of the dependent sampling tree and the pseudo-code of DSA, I could finally understand the N-job case.\n2.\tThe authors proved the optimality in the two-job case, which is very important for this problem theoretically. The algorithm in the N-job case seems to be a greedy algorithm. It is better that the authors could give proof of the optimality.\n 1.\t If I understand clearly, the proposed method totally controls to sample the same data when the datasets are the same. Compared to the classic sampling in random, is there a tradeoff between strict correlation and randomness? \n2.\tIs it possible to give the proof to the optimality in N-job case?\n N.A.",
" This paper presents Joader, a data loading system optimized for the scenario where multiple training jobs share overlapped sources of data and data preprocessing. The proposed system overcomes the constraints where training jobs could vary in training speed which causes cache thrashing.\n Strength\n1) The proposed idea, dependent sampling algorithm (DSA), is novel and effective with good theoretical guarantee, while providing flexibility in varying speed across training jobs, free starting/stopping of jobs, partitial overlapping;\n2) The engineering contribution that integrates Joader into PyTorch and enables distributed training is highly convenient for downstream users of this research;\n3) Preliminary experiments demonstrate that this approach is effective.\n\nWeaknesses:\n1) The evaluation is only conducted on a single workload or a single series of workloads (ResNet).\n2) Distributed training does not exist in experiment setting, while it is indeed an extremely important usecase considering multiple overlapped training jobs in a cluster. Appendix D.4 does briefly mention that there is no much change to make Joader work for distributed training, but no further experiments are conducted.\n Is there any specific reason that Joader only uses ResNet series in experiments? While image classification is a quite standard task to test against, it would be more convincing to justify with data that it works with diverse set of models.\n While the proposed system is general and less restrictive comparing with previous work, the experiments are set to specifically justify the claims in previous sections, the reviewer believes that it would potentially attract more users if there is at least one experiment for Joader to focus on distributed training setting.\n"
] | [
-1,
-1,
-1,
-1,
-1,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"YYcP_DZSHy",
"nips_2022_tZUOiVGO6jN",
"UYpm88oAhc",
"wRxUIjqlKPI",
"ZWz6paCV0M",
"nips_2022_tZUOiVGO6jN",
"nips_2022_tZUOiVGO6jN",
"nips_2022_tZUOiVGO6jN"
] |
nips_2022_gE_vt-w4LhL | Squeezeformer: An Efficient Transformer for Automatic Speech Recognition | The recently proposed Conformer model has become the de facto backbone model for various downstream speech tasks based on its hybrid attention-convolution architecture that captures both local and global features. However, through a series of systematic studies, we find that the Conformer architecture’s design choices are not optimal. After re-examining the design choices for both the macro and micro-architecture of Conformer, we propose Squeezeformer which consistently outperforms the state-of-the-art ASR models under the same training schemes. In particular, for the macro-architecture, Squeezeformer incorporates (i) the Temporal U-Net structure which reduces the cost of the multi-head attention modules on long sequences, and (ii) a simpler block structure of multi-head attention or convolution modules followed up by feed-forward module instead of the Macaron structure proposed in Conformer. Furthermore, for the micro-architecture, Squeezeformer (i) simplifies the activations in the convolutional block, (ii) removes redundant Layer Normalization operations, and (iii) incorporates an efficient depthwise down-sampling layer to efficiently sub-sample the input signal. Squeezeformer achieves state-of-the-art results of 7.5%, 6.5%, and 6.0% word-error-rate (WER) on LibriSpeech test-other without external language models, which are 3.1%, 1.4%, and 0.6% better than Conformer-CTC with the same number of FLOPs. Our code is open-sourced and available online. | Accept | The paper conducts thorough analysis of the Conformer architecture and brings insights and techniques from other fields to simplify and improve the model structure, which is also demonstrated to show nice gains. Though as pointed by reviewers the novelty is limited, the study is very useful to the field. | val | [
"MU30AZIGMOX",
"4YHdvtFvBTG",
"UQZEbgD8lQ_",
"JZYRysqiHr6",
"pRWdbijega",
"8iQjAsRLOOV",
"NErDY0uGrY",
"FmnBs-qWysX",
"SnVFeNMHsIv",
"cbOnvcF0N9G",
"FOkHfvj5juH",
"FpJmwrxLAdb",
"Af5A0CL5wPo",
"nxY2UXJXGnC"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the final round of edits; I believe the presentation (once Figure 1 is also iterated on) is now fair and very clear. I also appreciate the authors' extensive insights and experiments: here, showing the improved blank/non-blank token repetitions of Squeezeformer-CTC over Conformer-CTC, and to the other reviewers (continuing their paper's trend of precise ablations).\n\n**I increase my soundness and presentation scores from 3/4 to 4/4, and my score from 6/10 to 8/10.** This paper is an exemplar of what proposals of new deep-learning architectures should look like.",
" We appreciate the reviewer’s feedback that helps us improve the paper. We acknowledge the reviewer’s concerns and clarified them in our revised version of the paper (marked blue). \n\n> **Q1.** I agree it is reasonable that you did not run Transducer experiments. I would like to see this early, as a phrase in the abstract and/or an extra sentence in L41 like \"Furthermore, the Conformer architecture was optimized for the ASR-specific Transducer model, but has been adopted with less critique to encoder-only schemes like CTC or self-supervised pretraining.\"\n\nWe brought this point upfront in **L31** to help readers notice that our main focus is on the encoder-only Conformer with CTC decoder.\n\n> **Q2-1.** perhaps a sentence in L167 acknowledging the paradox of reduced temporal dependency at the final layer still improving CTC would help.\n\nAlthough it is not added in the revised paper due to the page limit, we will add the following sentence in **L169** of the final version of the paper:\n\nWhile a certain amount of redundancy might help cluster subsequent time frames into the same label, our observation demonstrates that the large amount of time redundancy as in Conformer is unnecessary for successful decoding, and by avoiding this, a better efficiency-accuracy tradeoff can be achieved.\n",
" > **Q2-2.** I also mentioned it in case authors want to do a quick inspection of outputs (maybe Squeezeformer has fewer character deletions thanks to this? or now exhibits alternating behaviors like H_H_H_ that could be fixed -- many CTC-related algorithms rely on behaviors of the blank token). I do see Squeezeformer gaining wide adoption for encoder-only speech applications, so understanding such behaviors would help others down the line.\n\nThis is an excellent question. We did some empirical analysis and found that the reduced cosine similarity is **not** because Squeezeformer is adding extra blank characters. In more detail, to better understand the impact of Squeezeformer to CTC decoding, we conducted several analyses on output characters that the CTC decoder produces. For this, we measured the number of\n\n1. blank tokens (e.g., ‘_’),\n2. repeating non–blank tokens (e.g., ‘aa’, ‘bb’),\n3. transition from a non-blank token to a non-blank token (e.g., ‘aa’, ‘bb’, ‘ab’), and\n4. transition from a non-blank token to a blank token (e.g., ‘a_’, ‘b_’)\n\nfor the Conformer and Squeezeformer family on LibriSpeech test-clean and test-other datasets. The percentage of these numbers with respect to the total character counts on the entire dataset is presented in the tables below (nb: non-blank token, b: blank token).\n\n| test-clean | % b | % repeating nb | % nb→nb | % nb→b |\n| ------------------- | ----- | -------------- | ------- | ------ |\n| Conformer-S | 64.00 | 4.61 | 12.32 | 23.67 |\n| Conformer-M | 63.82 | 4.77 | 12.41 | 23.76 |\n| Conformer-L | 65.30 | 3.29 | 9.97 | 24.72 |\n| **Conformer (avg)** | 64.37 | 4.22 | 11.56 | 24.05 |\n| Squeezeformer-XS | 61.93 | 6.69 | 15.46 | 22.60 |\n| Squeezeformer-S | 60.65 | 7.96 | 17.74 | 21.61 |\n| Squeezeformer-SM | 60.61 | 7.99 | 17.63 | 21.75 |\n| Squeezeformer-M | 60.58 | 8.01 | 17.98 | 21.44 |\n| Squeezeformer-ML | 60.78 | 7.81 | 17.08 | 22.13 |\n| Squeezeformer-L | 60.29 | 8.32 | 17.94 | 21.77 |\n| **Squeezeformer (avg)** | 60.80 | 7.79 | 17.30 | 21.88 |\n\n\n| test-other | % b | % repeating nb | % nb→nb | % nb→b |\n| ------------------- | ----- | -------------- | ------- | ------ |\n| Conformer-S| 65.56 | 4.03| 11.74 | 22.69 |\n| Conformer-M| 65.29 | 4.25| 11.99 | 22.71 |\n| Conformer-L| 66.58 | 2.93| 9.77 | 23.65 |\n| **Conformer (avg)** | 65.81 | 3.73| 11.16 | 23.01 |\n| Squeezeformer-XS | 63.63 | 5.98| 14.70 | 21.66 |\n| Squeezeformer-S | 62.42 | 7.10| 16.75 | 20.82 |\n| Squeezeformer-SM | 62.35 | 7.15| 16.73 | 20.91 |\n| Squeezeformer-M | 62.31 | 7.17| 17.06 | 20.63 |\n| Squeezeformer-ML | 62.35 | 6.96| 16.11 | 21.35 |\n| Squeezeformer-L | 62.08 | 7.41| 16.92 | 21.00 |\n| **Squeezeformer (avg)** | 62.52 | 6.96| 16.37 | 21.06 |\n\nFor the test-clean dataset, Squeezeformer produced fewer blank tokens (3.5% less on average) compared to Conformer $\\textit{across all}$ models in the same model family. In addition, Squeezeformer shows more transitions from a non-blank token to another non-blank token (5.74% more on average) as well as a larger number of repeating tokens (3.57% higher on average). On the contrary, we see a decrease in the number of non-blank to blank token transitions (2.17% less on average). We observed an identical trend with the test-other dataset. \n\nIn summary, Squeezeformer does not tend to produce extra blank tokens as compared to Conformer, and thus the reason for the increased cosine similarity is not because of producing extra blank tokens. \n\n> **Q3.** understood. 
As I (and presumably many speech folks reading this paper) are less familiar with hardware details, you could replace L192 with a more self-contained example, such as: \"...multiple activations complicates hardware deployment; e.g., on low-end edge devices with no dedicated vector processing units, supporting additional non-linear operations requires additional lookup tables or advanced algorithms\"\n\nThis is a fair point. We added **L195** to further elaborate on this point.\n\n> **Q4.** I understand now that you are only comparing CTC-style models in Tbale 3. it would then help to write \"Transformer-CTC\" / \"Self Attention-CTC\" and \"Eff. Conformer-CTC\" in its caption and model list, as well as \"state-of-the-art CTC models for ASR\". You can see how \"WER (%) comparison on LibriSpeech ... for Squeezeformer ..., Transformer, and Efficient-Conformer\" was misleading as the latter two can be read as the encoder-decoder Transformer and the Transducer-style Efficient Conformer, which as you now note in Footnote 2 are both known to be stronger than CTC, which is why the table made me skeptical.\n\nWe clarified these in the caption and the entries of Table 3.\n\n\nFinally, we appreciate the reviewer’s comment on the presentation concern regarding Figure 1. We are investigating the best practice for visualization.",
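The blank/non-blank percentages in the tables above can be reproduced from frame-level CTC outputs with a few lines of counting. The helper below is our own illustration rather than the authors' evaluation script; it assumes '_' as the CTC blank symbol and counts the same four categories on a toy frame sequence.

```python
from collections import Counter

BLANK = "_"

def ctc_token_stats(frame_tokens):
    """Percentage of blanks, repeated non-blank tokens, and transition types
    in a frame-level CTC output (before collapsing), e.g. "HH_EE_Y__"."""
    stats = Counter()
    total = len(frame_tokens)
    for i, tok in enumerate(frame_tokens):
        if tok == BLANK:
            stats["blank"] += 1
        if i == 0:
            continue
        prev = frame_tokens[i - 1]
        if prev != BLANK and tok != BLANK:
            stats["nb->nb"] += 1
            if prev == tok:
                stats["repeating nb"] += 1
        elif prev != BLANK and tok == BLANK:
            stats["nb->b"] += 1
    return {k: round(100.0 * v / total, 1) for k, v in stats.items()}

print(ctc_token_stats("HH_EE_Y__"))
# {'nb->nb': 22.2, 'repeating nb': 22.2, 'blank': 44.4, 'nb->b': 33.3}
```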
" Re: **Q1**, I agree it is reasonable that you did not run Transducer experiments. I would like to see this early, as a phrase in the abstract and/or an extra sentence in L41 like \"Furthermore, the Conformer architecture was optimized for the ASR-specific Transducer model, but has been adopted with less critique to encoder-only schemes like CTC or self-supervised pretraining.\"\n\nRe: **Q2**, perhaps a sentence in L167 acknowledging the paradox of reduced temporal dependency at the final layer still improving CTC would help. I also mentioned it in case authors want to do a quick inspection of outputs (maybe Squeezeformer has fewer character deletions thanks to this? or now exhibits alternating behaviors like H_H_H_ that could be fixed -- many CTC-related algorithms rely on behaviors of the blank token). I do see Squeezeformer gaining wide adoption for encoder-only speech applications, so understanding such behaviors would help others down the line.\n\nRe: **Q3**, understood. As I (and presumably many speech folks reading this paper) are less familiar with hardware details, you could replace L192 with a more self-contained example, such as: \"...multiple activations complicates hardware deployment; e.g., on low-end edge devices with no dedicated vector processing units, supporting additional non-linear operations requires additional lookup tables or advanced algorithms\"\n\nFinally, I reiterate my concern re: \"flipping the WER y-axis\" in Figs. 1 and 2, and how it flips directionality of WER and GFLOPs in Fig. 1. (Please give your readers more credit!)",
" Re: **Q4**, I understand now that you are only comparing CTC-style models in Tbale 3. it would then help to write \"Transformer-CTC\" / \"Self Attention-CTC\" and \"Eff. Conformer-CTC\" in its caption and model list, as well as \"state-of-the-art CTC models for ASR\". You can see how \"WER (%) comparison on LibriSpeech ... for Squeezeformer ..., Transformer, and Efficient-Conformer\" was misleading as the latter two can be read as the encoder-decoder Transformer and the Transducer-style Efficient Conformer, which as you now note in Footnote 2 are both known to be stronger than CTC, which is why the table made me skeptical.\n\nRe: **Q5** I can accept that the result is empirical.\n\nGood ideas raised re: **Q6**, and understood re: **Q7**.",
" We appreciate all the reviewers for taking the time to review our work, and providing us with their valuable feedback.\nWe provided responses to the questions that each of the reviewers has commented on, and uploaded the revised version of the paper and supplementary material with the typos and minor/major fixes addressed.",
" We appreciate the reviewer's valuable comments. Responses to your questions are provided below.\n\n> **Q1. (Limitation 1 / Question 1)** Lack of experimental justification with RNN-T Decoder: The original Conformer was optimized for the bi-encoder Transducer objective and not for the uni-encoder CTC objective. For example, the Conformer architecture must also operate well on tokens (in the Transducer-specific label encoder), on which it is harder to e.g., justify downsampling. This work shows that Squeezeformer-CTC outdoes Conformer-CTC; it does not show Squeezeformer outdoes Conformer generally, not even in the Conformer's original setting.\n\nWe did not perform experiments with RNN-T due to the limited compute resources and, therefore, our claims in the paper have been strictly about CTC models which are widely used in production today. We do not claim superior performance with RNN-T decoder, neither in the manuscript nor in rebuttal. We will clarify this in the final version of the paper. Having said this, CTC ASR models are indeed getting increased attention in industry due to the need to process audio signals in the data centers for applications such as offline ASR.\n\n> **Q2. (Weakness 1)** In 3.1.1, cosine similarity in adjacent frames is touted as redundancy which is implied as bad, even in the last layer (L164-166). But redundant embeddings are good for the CTC objective, where consecutive equal predictions can express staying in the same token. \n\nA certain amount of redundancy might be needed for clustering subsequent time frames into the same label and therefore improving the CTC decoding performance. At the same time, it might have a negative impact from the standpoint of efficiency and end-to-end latency. Reducing the FLOPs associated with the U-Net subsampling also allows one to use deeper or wider models which in turn may result in higher accuracy. Therefore, it is important to find the right tradeoff. Our results show the noticeable accuracy (WER) improvement under the same (or even smaller) computational cost through downsampling, which demonstrates the positive impact of increased efficiency from downsampling is greater than the potential negative impact that reduced redundancy and temporal information would have caused. Under a more drastic downsampling scheme than the proposed method, this trend may have been flipped and the accuracy (WER) could have been degraded due to the lack of enough redundancy and temporal information as like the reviewer’s claim. Investigating the impact of temporal redundancy on successful CTC decoding and searching for the optimal amount of redundancy would be an interesting future research direction.\n\n> **Q3. (Weakness 2)** Are multiple activation functions really a problem for efficient inference (L190)? There is still only one type of repeating block in the Conformer. Furthermore, Squeezeformer introduces new downsampling and upsampling layers, as well as variable memory consumption. \n\nThe overhead of supporting non-linear operations is actually pretty important for deployment on low-end edge devices which often do not contain dedicated vector processing units found in server-grade GPUs. One popular practice has been a lookup table that stores pre-computed outputs of non-linear functions where supporting multiple non-linear operations would require multiple lookup tables or advanced algorithms. 
Because non-linear operations in DL applications (as in Conformer/Squeezeformer’s case) are often on the critical path, supporting fast lookup tables often entails hardware cost (e.g., area and complexity), and reducing this hardware cost is an active research area [a, b, c]. \n\nHowever, new downsampling/upsampling layers such as those in Squeezeformer do not require extra logic or dedicated look-up tables in hardware. While it involves variable memory consumption that might require more involved hardware/mapping optimizations, its drastic reduction in number of FLOPs (2x) will eventually benefit the inference cost as can be seen in the end-to-end latency improvement of Table 3.\n\n\\\n\\\nReferences:\n\n[a] Geng et al. Hardware-aware Exponential Approximation for Deep Neural Networks. https://openreview.net/pdf?id=Sksl1mJPM\n\n[b] Geng et al. Hardware-aware Softmax Approximation for Deep Neural Networks. https://oar.a-star.edu.sg/storage/p/pv0k3qeq26/0421.pdf \n\n[c] Yu et al. NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference. https://arxiv.org/pdf/2112.02191.pdf \n",
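To make the lookup-table argument above concrete, the toy below approximates a single non-linearity (sigmoid) with one pre-computed table, the way a device without vector units might; every additional activation (Swish, the GLU gate, GELU, ...) would need its own table or a more advanced shared approximation scheme. This is a generic illustration of ours, not the hardware designs from [a, b, c].

```python
import numpy as np

GRID = np.linspace(-8.0, 8.0, 1025)         # one table per non-linearity
TABLE = 1.0 / (1.0 + np.exp(-GRID))         # pre-computed sigmoid values

def sigmoid_lut(x):
    # Clip to the table range, then look up the nearest grid point at or above x.
    x = np.clip(x, GRID[0], GRID[-1])
    idx = np.searchsorted(GRID, x)
    return TABLE[np.minimum(idx, len(TABLE) - 1)]

x = np.linspace(-10.0, 10.0, 1001)
err = np.abs(sigmoid_lut(x) - 1.0 / (1.0 + np.exp(-x))).max()
print(err)   # worst-case error of a few 1e-3 with a 1025-entry table
```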
" > **Q4. (Presentation Concern 2)** In Table 3's last two rows, it is misleading to selectively include one non-CTC baseline. The original Conformer paper includes Transformer numbers (from your [22]) that are notably stronger (2.2, 5.6, 2.6, 5.7) than what you list. In fact, the original Conformer(-Transducer)'s numbers are far better (test = (2.1, 4.3)) with only 118.8M params.\n\nThe Transformer that we used as our baseline is a CTC-based architecture (second to the last row in Table 3) as also mentioned in section 3.3.1 of [d]. We are also aware of the numbers reported in [e]; however, we did not compare those numbers with ours as [e] augmented external language model to decoding (section 4.3 and equation 15 in the paper) which in general results in a trivial WER improvement.\n\n> **Q5. (Questions 2)** What if the reduction in cosine similarities is not meaningful? What if U-Net outdoes downsampling for a simpler reason?\n\nOur observations and the results demonstrate that for successful decoding it is unnecessary to have a large amount of temporal redundancy as Conformer does, and by avoiding it we can achieve a better efficiency-accuracy trade-off. While it is not possible to give a definite answer due to the many moving parts involved in this problem, demystifying the impact of temporal redundancy on CTC decoding would be an interesting future research direction, as also mentioned in the answer to Q2.\n\n> **Q6. (Questions 3)** Why do you think similarity increased (vs. baseline) before the downsampling?\n\nThis observation would be worth investigating further. For now, our intuitive explanation is that the downsampling layer introduces an explicit decoupling of the roles of the bottom and top layers such that the bottom layers focus more on high frequency features while the top ones focus on low frequency features. In such a case, we expect that the bottom layers embed time frames based on their local dependency, which makes their embeddings more similar to their neighboring frames. Another explanation could be that the bottom layers learn to compensate for the reduced redundancies after downsampling. Such downsampling operations are also known to produce different attention similarity patterns in vision tasks such as in MViT-v1 (Figure A.6) [f]. \n\n> **Q7. (Questions 4)** If unifying activations, why to Swish instead of GLU?\n\nGLU requires more complex memory transfer operations than Swish activation which is the reason we chose Swish instead. Note that Swish can be performed elementwise, while GLU requires applying a non-linear activation on the second half of the signal and multiplying it with the first half. \n\n\\\n\\\nReferences:\n\n[d] Likhomanenko et al. Rethinking Evaluation in ASR: Are Our Models Robust Enough? https://www.isca-speech.org/archive/pdfs/interspeech_2021/likhomanenko21_interspeech.pdf ([28] in our paper)\n\n[e] Karita et al. A Comparative Study on Transformer vs RNN in Speech Applications https://arxiv.org/pdf/1909.06317.pdf ([22] in our paper)\n\n[f] Fan et al. Multiscale Vision Transformers. https://arxiv.org/pdf/2104.11227.pdf \n\n",
" We appreciate the reviewer's valuable comments. Responses to your questions are provided below.\n\n> **Q1. (Weakness 1 / Question 2)** My main criticism is that the baseline has much worse performance than it is reported in the original Conformer paper. As far as I can see the main difference between the 6.8% (test-other) reported here and 4.3% in the original paper is the decoder. (Add the previously reported conformer result into the table)\n\n\nWe acknowledge the reviewer’s point that there is a performance gap between the reported numbers and our reproduction for the Conformer baseline. The difference mainly arises from the difference in decoder: please note that we are comparing Conformer-CTC and not RNN-T decoder which is used in the original Conformer paper. RNN-T decoders are generally known to result in better (lower) WER results, which have been widely observed and studied in prior literature [a, b, c]. Furthermore, there is the known difficulty of reproducibility of Conformer results as well. We’ve indeed made a considerable effort in reproducing Conformers despairing any looming bugs that might affect our own results as well. Unfortunately, the authors have not open-sourced their implementation, and the difficulty of reproducing their results has been a known problem in many prior works due to the absence of publicly available training codes and recipes. For this reason, prior works have also reported and compared against their own Conformer results trained under fair training conditions [c, d, e]. We mention upfront the performance gap against the original Conformer in Section 4.2 of the revised paper.\n\nUnder these considerations, we believe the more reasonable baseline for us is the Conformer-CTC checkpoints from NVIDIA’s open-source library Nemo [f], whose training codes and recipes are public as well. Below is a new table with the Nemo results for Conformer-S, M, and L appended. We note that, compared to our own Conformer implementation, Nemo implementation has two major differences that enhance performance: (1) Nemo’s Conformer S and M configurations are larger than ours (18 layers, 176 hidden dim for S and 18 layers, 256 hidden dim for M) which result in a larger number of FLOPs as can be seen in the table; (2) Nemo has been trained for 1000 epochs whereas ours are trained for 500 epochs. \n|Model| test-clean (%)| test-other (%)|FLOPs (G)| #training epochs |\n| ------------------ | ---- | ---- | ----- | ----- |\n| Conformer-S (Nemo) | 3.4 | 8.8 | 39.6 | 1000 |\n| Squeezeformer-S | 3.08 | 7.47 | 26.3 | 500 |\n| Conformer-M | 3.20 | 7.90 | 71.7 | 500 |\n| Conformer-M (Nemo) | 3.0 | 7.3 | 78.2 | 1000 |\n| Squeezeformer-SM | 2.79 | 6.89 | 42.7 | 500 |\n| Squeezeformer-M | 2.56 | 6.50 | 72.0 | 500 |\n| Conformer-L | 2.80 | 6.55 | 280.6 | 500 |\n| Conformer-L (Nemo) | 2.7 | 6.1 | 280.6 | 1000 |\n| Squeezeformer-ML | 2.61 | 6.05 | 169.2 | 500 |\n| Squeezeformer-L | 2.47 | 5.97 | 277.9 | 500 |\n| Squeezeformer-L* | 2.44 | 5.65 | 277.9 | 1000* |\n\n\n\nCompared to the Nemo’s results, the general trend of Squeezeformer achieving lower WER than Conformer in similar FLOPs regime remains the same. Moreover, under the fair condition with the same number of training epochs (i.e., 1000), Squeezeformer shows even further improvement as can be seen in the last column in the table (marked *), which further emphasizes the strength of Squeezeformer over Conformer. 
Due to the page limit, we will include this new comparison to Table 3 in the final version of the paper.\n\n\\\n\\\nReferences:\n\n[a] Zhang et al. Benchmarking LF-MMI, CTC and RNN-T Criteria for Streaming ASR. https://arxiv.org/abs/2011.04785 \n\n[b] Majumdar et al. Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition https://arxiv.org/pdf/2104.01721.pdf \n\n[c] Berchi et al. Efficient Conformer: Progressive Downsampling and Grouped Attention for Automatic Speech Recognition. https://arxiv.org/pdf/2109.01163.pdf\n\n[d] Shim et al. Understanding the Role of Self Attention for Efficient Speech Recognition. https://openreview.net/pdf?id=AvcfxqRy4Y\n\n[e] Lohrenz et al. Relaxed Attention: A Simple Method to Boost Performance of End-to-End Automatic Speech Recognition, https://arxiv.org/pdf/2107.01275.pdf \n\n[f] NVIDIA Nemo. https://github.com/NVIDIA/NeMo\n\n",
" \n> **Q2. (Question 3)** Evaluating on another dataset would strengthen the paper\n\nWe thank the reviewer for the suggestion. To address this concern, we have conducted an additional set of experiments on transferring Squeezeformer trained on Librispeech to TIMIT with and without finetuning. In both cases, we used the same sentence piece tokenizers as Librispeech training. For finetuning, we used the same learning rate scheduling with peak learning rate ($lr_{peak}$) in {0.5, 1, 2, 5}e-4, 2 epochs of warmup ($T_0$) and 0 epoch for maintaining the peak learning rate ($T_{peak}$). The WER is measured on the test split. The results are as follows: \n\n| | without finetuning | with finetuning | Params (M) | FLOPs (G) |\n| ---------------- | ---------------- | ------------- | ---------- | --------- |\n| Conformer-S | 18.09 | 13.41 | 8.7 | 26.2 |\n| Squeezeformer-XS | 16.31 | 12.89 | 9.0 | 15.8 |\n| Conformer-M | 13.91 | 10.95 | 27.4 | 71.7 |\n| Squeezeformer-S | 13.78 | 11.26 | 18.6 | 26.3 |\n| Squeezeformer-SM | 13.65 | 10.50 | 28.2 | 42.7 |\n| Conformer-L | 13.41 | 10.03 | 121.5 | 280.6 |\n| Squeezeformer-M | 13.44 | 10.32 | 55.6 | 72.0 |\n| Squeezeformer-ML | 11.35 | 9.96 | 125.1 | 169.2 |\n| Squeezeformer-L | 12.92 | 9.76 | 236.3 | 277.9 |\n\nAs can be seen in the table, the general trend is similar to the Librispeech result in Table 3: under smaller or same FLOPs and parameter counts, Squeezeformer outperforms Conformer. This empirically shows the transferability of Squeezeformer to an unseen/different ASR dataset. We included this Table in the supplementary of the revised paper (please check Section A.4 and Table A.2).\n\n> **Q3. (Question 4)** It seems that the suggestion to remove the Macaron architecture contradicts the original study https://arxiv.org/pdf/2005.08100.pdf Table 3. Am I missing something or this paper proposes to undo this change in the original?\n\nIn the Conformer paper, the detailed model architectures for ablations are not clearly described. However, given that Table 3 of the Conformer paper shows the impact of removing individual components from Conformer towards the vanilla Transformer, we can assume that the 4th row in the table is comparing the performance of (1) Transformer with relative positional embedding vs. (2) Transformer with relative positional embedding and Macaron structure, both of which don’t contain convolution blocks (i.e., MF structure vs. FMF structure). This is different from how we disentangle the Macaron structure from the Conformer structure (i.e., FMCF structure vs. FCMF structure), and is not contradictory to our intuition that having an FFN layer after MHA and convolution layers benefit performance. Further, in our experience, the architectural components often have non-linear interactions such that the overall ablation path is non-conservative in the final performance. Ablating a component, say, B after ablating A might show no drop in performance while ablating B directly on the original architecture can cause significant degradation and vice-versa. ",
" We appreciate the reviewer's valuable comments. Responses to your questions are provided below.\n\n> **Q1. (Weakness 1)** Even though the results are impressive, there is not much originality in this paper as it builds upon extensively studying the existing Conformer architecture and combines standard temporal downsampling tricks from vision.\n\nWhile the individual methods might not be regarded as novel, it is the first work to extend and carefully adjust these techniques to the ASR domain, which combined together to result in a significant performance improvement (both in WER and computational efficiency) over the de-facto Conformer architecture, as well as improved latency.\n\n> **Q2. (Question 1)** I wonder if the authors studied the positional embeddings in Conformer and thought of any improvements in a positional encoding scheme for self-attention in the Squeezeformer?\n\nWhile relative positional embeddings in Conformer involve additional computation costs as compared to absolute positional embeddings, we empirically observed that replacing them with absolute positional embeddings resulted in noticeable performance degradation. Developing better positional embedding schemes has been an active research area (not just in ASR, but across domains) including designing new schemes [a], or new self-attention mechanisms which do not require positional embeddings at all [b]. Our scope for analyzing the positional embedding was limited to absolute/relative embeddings. However, we do expect that Squeezeformer would benefit the same as other models with new positional embeddings in further research.\n\n\\\n\\\nReferences:\n\n[a] Likhomanenko et al. CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings. https://ronan.collobert.com/pub/2021_cape_arxiv.pdf \n\n[b] Shim et al. Similarity and Content-based Phonetic Self Attention for Speech Recognition. https://arxiv.org/pdf/2203.10252.pdf",
" This paper proposes Squeezeformer, a novel hybrid attention-convolution architecture for ASR through a series of extensive architectural studies and improvements over the Conformer architecture. Note, Conformer has been the de-facto architecture for E2E speech processing tasks and Squeezeformer simplifies the shows impressive improvements on the Conformer-CTC architecture. The authors also have open sourced the code and trained model checkpoints which should be supremely helpful to the community in re-using and building on these models. Strengths:\n * The authors extensively study the architectural design of Conformer and simplifies a lot of the choices in Conformer building out a simpler, cleaner and possibly better architecture, Squeezeformer.\n* The paper introduces U-Net style temporal scaling architecture and are able to show strong results at various different model scales improving upon earlier Conformer benchmarks which have been SOTA for possibly all speech processing tasks.\n* Given that Conformer has been the de-facto architecture for speech over the last few years, the results jumps in performance by Squeezeformer are significant and the extensive set of ablations and experiments in this paper are very helpful in providing insights to the community. \n\nWeaknesses:\n* Even though the results are impressive, there is not much originality in this paper as it builds upon extensively studying the existing Conformer architecture and combines standard temporal downsampling tricks from vision. But still, the results shown are strong and the experiments are significant that would be of huge help to the community. * I enjoyed reading through the paper, I wonder if the authors studied the positional embeddings in Conformer and thought of any improvements in a positional encoding scheme for self-attention in the Squeezeformer? N/A",
" The paper improves upon a Conformer model for speech recognition. The paper finds experimentally that the subsequent activations are are highly correlated in the higher confomer blocks. Therefore, it proposes to subsample in the the temporal dimension. Then, the paper proposes to remove the Macaron block. Finally it proposes to optimize the micro-architecture of the conformer: to use the Swish activation instead of GLU, use the PostLN with an extra scaling, and use a separable convolution for the first layer.\n\nThe paper describes in detail the motivation for each improvement. Then, the experiments show that adding each change one-by-one gradually improves the performance of the proposed model.\n\nFinally, the paper reports the performance of the proposed model in comparison to prior publications. # Strengths\n\nThe paper is very well written and is a delight to read. I was able to follow the motivations for each step of the improvement and the proposed change.\n\nThat's being said, each proposal is sound and makes sense. The experimentation is rigorous and follows the best practices.\n\n# Weaknesses\n\nMy main criticism is that the baseline has much worse performance than it is reported in [0]. As far as I can see the main difference between the 6.8% (test-other) reported here and 4.3% in [0] is the decoder. While I understand, that the baseline source code is not available, the paper needs to be more upfront about the differences and better performance of [0]. Perhaps, it can be included in the table as \"closed source SOTA with RNN-T decoder\" or something like this.\n\nThere is a paper which predates Jasper as an end-to-end CNN architecture: [1].\n\n[0] https://arxiv.org/pdf/2005.08100.pdf\n[1] https://arxiv.org/pdf/1701.02720.pdf\n\n# Typos\n\npage 7, 219 p -> alpha\nPage 8, line 255 work -> works\npage 8, line 258 \"-\" -> \"--\" Suggestions\n- Please fix the typos\n- Add the previously reported conformer result into the table\n- Evaluating on another dataset would strengthen the paper\n- It seems that the suggestion to remove the Macaron architecture contradicts the original study https://arxiv.org/pdf/2005.08100.pdf Table 3. Am I missing something or this paper proposes to undo this change in the original? N/A",
" The paper considers the Conformer architecture popularly used in recent speech-input tasks. Under the CTC objective, they observe the following properties and motivate solutions to give Squeezeformer:\n- Temporal redundancy of features motivates a U-Net downsample/upsample structure\n- Adjacent PreLN/PostLNs motivate converting one to scale normalization\n- Adjacent convolution/MHSAs motivate intervening FFNs\n\nas well as other changes (unified activation fns., depthwise-separable convs.). They additively demonstrate each's improvements over Conformer-CTC and compare them under fixed GFLOPs or fixed parameter counts, and then at three overall model scales (vs. small, vs. medium, and vs. large models).\n\n**Post-rebuttal: I increase my soundness and presentation scores from 3/4 to 4/4, and my score from 6/10 to 8/10.** This paper is an exemplar of what proposals of new deep-learning architectures should look like. Thanks to the authors for their extensive work.\n\nWhile none of the individual methods are novel, their careful application atop Conformer-CTC is. The motivations are verbally clear, at least, and the CTC comparisons are fair (at fixed GFLOPs, fixed counts) as well as the ablations (e.g. vs postLN or preLN only). Re: significance, I'm convinced that Squeezeformer-CTC outperforms Conformer-CTC, and would expect this to extend to other _uni_-encoder speech applications given the margin of improvement (see Limitations). Re: clarity, the process of development and the individual methods are very clear.\n\nHowever, some changes are motivated by debatable, possibly post-hoc \"intuitions\":\n- In 3.1.1, cosine similarity in adjacent frames is touted as redundancy which is implied as bad, even in the last layer (L164-166). But redundant embeddings are good for the CTC objective, where consecutive equal predictions can express staying in the same token. \n- Are multiple activation functions really a problem for efficient inference (L190)? There is still only one type of repeating block in the Conformer. Furthermore, Squeezeformer introduces new downsampling and upsampling layers, as well as variable memory consumption.\n\nAlso, some presentation concerns:\n- I disagree with flipping the WER y-axis in Fig. 1. It is jarring as an ASR practitioner; also, it is reasonable to expect readers to intepret lower=better, visually (for some, the jarring reversed order of y-axis values may outweigh any gain in visual intuitiveness). Even worse, _the GFLOPs are superimposed atop this \"higher-is-better\" WER plot, but GFLOPs are a \"lower-is-better\" quantity!_\n- In Table 3's last two rows, it is misleading to selectively include one non-CTC baseline. The original Conformer paper includes Transformer numbers (from your [22]) that are notably stronger (2.2, 5.6, 2.6, 5.7) than what you list. 
In fact, the original Conformer(-Transducer)'s numbers are far better (test = (2.1, 4.3)) with only 118.8M params.\n\nMinor issues:\n- FLOPs should defined before use, especially as it may be confused with FLOPS.\n- A sentence or two about the scaling proposed in [9] in L241 would be helpful.\n- L142-150 should mention that downsampling by itself is similar to Efficient Conformer\n- L34: \"it's\" --> \"its\", \"lengths\" --> \"lengths.\"\n- L35: \"inputs as alow pointed out\"?\n- L51: \"doubles\" --> \"halves\" (to match *down*sampling; the sampling rate is lower when fixed time is covered by fewer samples)\n- L80, etc.: \"depthwise\", \"depth-wise\", \"depthWise\" all used\n- L116, etc.: \"Librispeech\", \"LibriSpeech\" both used\n- L219: \"preLN-only\" --> \"postLN-only\"\n- Table 3: \"an NVIDIA’s Tesla a100\" --> \"an NVIDIA Tesla A100\"\n- L422: \"NVDIA\" --> \"NVIDIA\" - Are Squeezeformer's gains over Conformer possibly from optimizing for a different objective (from Transducer to CTC)? This limitation should be early and upfront.\n\n- What if the reduction in cosine similarities is not meaningful, e.g., outputs give H__E_Y____ --> H_HEEY_Y_Y? What if U-Net outdoes downsampling for a simpler reason (the output token resolution argument?)\n\n- Why do you think similarity increased (vs. baseline) before the downsampling?\n\n- If unifying activations, why to Swish instead of GLU? (Also the point above re: multiple activations and efficiency)\n\nSome comments re: presentation concerns above would also be appreciated. The original Conformer was optimized for the bi-encoder Transducer objective and not for the uni-encoder CTC objective. For example, the Conformer architecture must also operate well on tokens (in the Transducer-specific label encoder), on which it is harder to e.g., justify downsampling. This work shows that Squeezeformer-CTC outdoes Conformer-CTC; it *does not show* Squeezeformer outdoes Conformer generally, not even in the Conformer's original setting.\n\n(That said, the uni-encoder Conformer has become more typical, so these insights likely apply to many upcoming models.)"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"UQZEbgD8lQ_",
"pRWdbijega",
"pRWdbijega",
"NErDY0uGrY",
"FmnBs-qWysX",
"nips_2022_gE_vt-w4LhL",
"nxY2UXJXGnC",
"nxY2UXJXGnC",
"Af5A0CL5wPo",
"Af5A0CL5wPo",
"FpJmwrxLAdb",
"nips_2022_gE_vt-w4LhL",
"nips_2022_gE_vt-w4LhL",
"nips_2022_gE_vt-w4LhL"
] |
nips_2022_fpfDusqKZF | Neural Basis Models for Interpretability | Due to the widespread use of complex machine learning models in real-world applications, it is becoming critical to explain model predictions. However, these models are typically black-box deep neural networks, explained post-hoc via methods with known faithfulness limitations. Generalized Additive Models (GAMs) are an inherently interpretable class of models that address this limitation by learning a non-linear shape function for each feature separately, followed by a linear model on top. However, these models are typically difficult to train, require numerous parameters, and are difficult to scale.
We propose an entirely new subfamily of GAMs that utilizes basis decomposition of shape functions. A small number of basis functions are shared among all features, and are learned jointly for a given task, thus making our model scale much better to large-scale data with high-dimensional features, especially when features are sparse. We propose an architecture denoted as the Neural Basis Model (NBM) which uses a single neural network to learn these bases. On a variety of tabular and image datasets, we demonstrate that for interpretable machine learning, NBMs are the state-of-the-art in accuracy, model size, and, throughput and can easily model all higher-order feature interactions.
Source code is available at \href{https://github.com/facebookresearch/nbm-spam}{\ttfamily github.com/facebookresearch/nbm-spam}. | Accept | The paper proposes an approach, Neural Basis Model (NBM), that can be seen as a new subfamily of Generalized Additive Models for interpretability. The proposed model is compared to alternatives, showing competitive performance while being computationally more efficient. The authors successfully addressed questions raised by reviewers. As also noted by the authors, a major limitation of the paper remains to be the requirement of the input features being interpretable. This would limit the applicability and utility of the proposed model, limiting the significance of the contribution. | val | [
"sglLX1DpYjn",
"jLYYgW6MeD",
"oWNyu5RrGqy",
"hz2Lxvg6wIH",
"4VvDiLXCCXR",
"eKn-SCJVzT",
"zjS5knxuU8l",
"v6NniSz4o8K",
"EpfbwNuwRIU",
"YDkxWSBGXFa",
"nPpQWqST3eP",
"RNTBuHyYZO",
"CBh_j0TVmlE"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and input. \n\nWe will make sure to include following in the camera ready: results on all 16 datasets (9 new from rebuttal) with appropriate baselines and updated SOTA, additional visualizations (some are already added in the updated supplementary, see Appendix Section A.4 and Figure A.2), and additional discussion in response to all reviews.",
" Thanks for the detailed response, I would keep my original rating after reading other reviewers' reviews. \nPlease clearly expose the new table and experiments during revision.\nBTW, there is a typo in my reviews, it should be inpainting instead of \"pinpointing\", sorry about the misleading.",
" Thank you for the updated review and score! We have updated the supplementary material (see Appendix Section A.4 and Figure A.2) to add some GAM graphs on CUB-200 dataset for different bird species. We will add more visualizations for the camera ready.\n\nLikewise, we will include all the results presented during the rebuttal period (all 16 datasets) in the camera ready, we would like to run all baselines (Linear, MLP, etc.) and ensure we meet the space limit for which we need a bit more time, I hope the reviewer can appreciate our effort. \n\nFinally, additional discussion in response to all reviews will be incorporated in the final revision of the paper, as well.",
" I appreciate the effort authors bring to the revision especially given the short period of rebuttals. The datasets are much more comprehensive which relieves the dataset picking concern. The performance from NAM and NBM seems most in Click but relatively small in others.\n\nI think it will be very valuable if you can show more GAM graphs (also requested from below reviewer uzMk) especially on the image datasets to see if they are indeed interpretable qualitatively, or maybe not interpretable in some cases. I believe there is no one applying concepts bottleneck + GAM to image spaces and I would love to see if this approach makes sense.\n\nOverall, I still think the proposed change is still somewhat incremental, and the performance gain does not seem to be much except for the imaging datasets (iNat Birds and Common Objects) and CLICK.\nBut I do understand the point that improves from the original NAM which makes it faster and more scalable. And the experiments in image datasets are interesting but I believe it needs more analysis.\n\nOne thing that may be worth noting is that Openreview allows modifying the paper, so it will be great to see if those promised changes are incorporated in the revision. But I understand there are space and time issues so it's not necessary at this point.",
" - Thank you for your comments, and thank you for stating that “the proposed model achieved very good experimental results”, and that it “is easy to follow”.\n- “It's more natural to introduce GAM, NAM, then NBM, which helps readers understand the contribution of this paper”\n - Thank you very much for this suggestion, we will revise the manuscript to incorporate it.\n- “Do you do any data normalization before training NBM?”\n - In Section 4.1 “Datasets” we define the normalization performed before training NBMs. For tabular datasets we perform one-hot encoding for categorical features, and min-max scaling of features to [0,1] range for continuous features. For image datasets, interpretable features are extracted as output probability score (already in [0,1] range) of trained part-attribute models. Same normalization is performed for NAM. \n - Note that, it is common practice for any approach to normalize structured data [2, 11, 30, 31]. When the normalization is not performed, there is a slight drop in performance for NBM as well as all other baselines. We observe that NBM can handle different ranges of input features by learning particular bases for different inputs.\n - Finally, in the response to Reviewer 9iGQ we add 9 new datasets, see Table R.1, and follow their instructions for normalization: ordinal encoding for categorical features and quantile transform to Gaussian distribution. With very little hyperparameter tuning, NBMs achieve SOTA results as well.\n - We will expand on this in our final revision.\n- “Why does NBM achieve better experimental results than NAM?”\n - Indeed, the Reviewer is right that the representative power of the model class of NAM is larger than NBM, and yes, performance of NBM is better due to it being regularized in the functional space of NAM MLPs. In the Appendix Section A.4, we prove this relationship more precisely in the case when, for any GAM, the shape functions have small norm in a reproducing kernel Hilbert space (RKHS) (Theorem 1 in Appendix). We demonstrate under reasonable regularity assumptions that, the reduction in model capacity, by using a basis model instead of a full GAM, only causes a marginal increase in generalization error at a substantial decrease in model complexity. Moreover, the error term decreases exponentially as the number of bases increases. To the best of our knowledge, this analysis is the first of its kind for interpretable models. This suggests that the basis models can efficiently approximate full GAMs without many basis terms. \n - Intuitively, the regularization that NBM introduces can be thought of as a functional analog of the regularization that decompositions such as SVD provide. We use only a small set of basis functions to span the space of all MLPs. Indeed, to get NAM to achieve similar performance as NBM, one can do a functional low-rank approximation after the NAM model is trained to regularize it effectively. NBM, however, directly learns this low-rank version during training itself by sharing bases.\n - Indeed, as Reviewer points out, we often observe that NAM and NBM achieve similar training accuracy / loss, but NBM results in better test accuracy.\n- “What kind of high dimension data can/can't be handled by NBM?”\n - We do recognize that our approach, though highly scalable, has limitations w.r.t. number of input features. Beyond 10K dense features, or 1M sparse features, we would need to apply some form of feature selection [11, 31, 47] to scale further. 
Scalability issue is even more pronounced when modeling pairwise interactions in NB$^2$M. However, NBMs can still handle an order of magnitude more than what can be handled by NAM or other GAM approaches that do not perform feature selection. We are exploring a direction where we model higher order interactions via learnable polynomials, which scale significantly better. However, this is beyond the scope of this work, and left for future work.\n",
" - Thank you for your comments, and for stating that “idea of decomposing each feature’s shape function into a small set of basis functions seems novel and efficient”.\n- “Analysis of the shape of the learned basis and a visualization of how these basis represent the inputs”\n - The basis functions are not interpretable themselves, their weighted combination for a given feature is interpretable, as depicted in Figure 3.\n - We plotted and analyzed the shapes of basis functions, and we observed that the model learns bases of varying frequencies. We will add this analysis in the appendix of the final revision of the paper.\n- “How does NBM perform compared with recent implicit functions”\n - We added 9 new datasets and compared against the recent state-of-the-art GAM approach NODE-GAM [11]. You can see the comparison and detailed discussion in our response to Reviewer 9iGQ (see Table R.1 and related discussion).\n- “The idea of basis learning is also related to the radial basis functions”\n - The Reviewer is right that basis learning is related to radial basis functions. Specifically, in the case when shape functions have small norm in a reproducing kernel Hilbert space (RKHS) spanned by an RBF kernel, basis methods correspond to kernel approximation via Mercer’s theorem. We demonstrate in the Appendix A.4 a precise tradeoff (Theorem 1) between the number of basis functions used and generalization error in an RKHS. We will move this discussion to the main paper.\n- “Have you try the proposed method on the ill-posed tasks? such as image pinpointing and radiance field reconstruction?”\n - We have explored interpretability in image classification and detection (localization) domains. These tasks required significant overhead in proper setup and evaluation in the interpretability area. Thus, we have not explored additional image tasks. However, we appreciate your suggestion, and we will be sure to explore image pinpointing and radiance field reconstruction as future work. If you have suggestions on suitable related works or benchmarks, we would love to add them to our codebase.\n- “Is there any similarity between the learned basis and the basis extracted from PCA-like approaches?”\n - Yes! The Reviewer is correct - Intuitively, the regularization that NBM introduces can be thought of as a functional analog of the regularization that decompositions such as SVD provide. Instead of learning a basis decomposition of a subspace of $\\mathbb R^d$, we optimize via neural networks to select a data-dependent basis of the $L^2$ space of functions ($L^1$ in the case of $\\ell_1$-regularized NBM). We will add this point to the final version.\n- “Any further advantages of interpretable modeling apart from the visualization?”\n - Concept bottleneck models [23], which we demonstrate NBMs improve, are shown to be useful for correction and intervention.\n - Being able to detect the impact of the bias in the data on the model, and then to repair the model, is critical if we are going to deploy machine learning in applications that affect people’s health, welfare, and social opportunities [5, 10, 44, 45]. This requires models that are interpretable.\n- “The paper lacks a limitations analysis.”\n - We do recognize that our approach, though highly scalable, has limitations w.r.t. number of input features. Beyond 10K dense features, or 1M sparse features, we would need to apply some form of feature selection [11, 31, 47] to scale further. 
Scalability issue is even more pronounced when modeling pairwise interactions in NB$^2$M. However, NBMs can still handle an order of magnitude more than what can be handled by NAM or other GAM approaches that do not perform feature selection. We are exploring a direction where we model higher order interactions via learnable polynomials, which scale significantly better. However, this is beyond the scope of this work, and left for future work.\n - As stated in the conclusions, for the computer vision domain, we assume an intermediate, interpretable concept layer on which GAMs can be applied. We cannot apply NBMs directly on pixel space, while maintaining model interpretability, but we are working on projects that resolve this limitation.\n - We will add this discussion in the final version of the paper.\n",
" - “Low originality”:\n - To the best of our knowledge, sharing basis functions is a novel concept in the context of GAMs, or interpretability in general. We propose a novel sub-family of GAMs using shared bases, that can be learned in an arbitrary fashion (splines, boosted trees, etc), and an approach to learn them via DNNs. We also propose an efficient first-of-its-kind extension to sparse datasets, where other GAMs do not scale without any feature selection.\n - We extensively evaluate on regression and binary classification, as well as on multi-class tabular and image datasets, which have been underexplored in GAM papers: EBM [30, 31], NAM [2], NODE-GAM [11].\n - In addition to the algorithmic and experimental contributions, we provide novel learning-theoretic results on functional basis approximation that are presented in the Appendix A.4. We demonstrate a precise guarantee on the generalization error (Theorem 1) that applies to any basis decomposition of GAMs where the shape functions lie in a reproducing kernel Hilbert space (RKHS). To the best of our knowledge, such an analysis is not present in the existing literature on interpretable machine learning.\n- “The hard sharing in NBM can incur negative transfer that lowers the performance”\n - We haven’t observed NBMs obtaining lower accuracy compared to the most related approach NAM in 7 datasets presented in the paper, and additional 9 added in the rebuttal, see Table R.1.\n - In addition, in Appendix A.4, we show that we only require $B = O(\\log D)$ for a competitive performance. Moreover, using a shared basis acts as a regularizer that makes training more stable. As is with any modeling inductive bias, one must analyze the data prior to learning a model to understand the best hyperparameter setting.\n - If the reviewer has a precise theoretical or experimental setting in mind, it would be great to learn a concrete case of “negative transfer”. Across our experimental benchmark (which now contains 16 datasets across tabular tasks as well as structured computer vision problems via concept bottlenecks), we have yet to see any evidence for negative transfer. Moreover, our theoretical results also suggest that with sufficient bases, “negative transfer” is unlikely to occur. \n- “The performance benefit does not seem to be big.”\n - We would like to respectfully disagree: the performance benefit of using NBMs instead of NAMs (the most related model) is four-fold:\n - Up to 5% relative improvement in prediction accuracy\n - Up to 50x reduction in parameters\n - Up to 7x better throughput\n - For large datasets, e.g. Newsgroups and Common Objects (with pairwise interactions), NAM does not scale at all. NBM scales effortlessly.\n- “Doesn't the NB2M still have the quadratic growth of the parameters? In Eq. (5), the parameter $b_{i,j,k}$ still grows quadratic.”\n - The reviewer is indeed correct that the $b_{i,j,k}$ grows quadratically, but that parameter is negligible compared to the size of the basis model, i.e., B << N where B is number of bases, and N is #params in the basis model. Entire model of size M in NA$^2$M grows quadratically with feature dimension, while only our smallest part of the model grows quadratically. 
Nevertheless, we will correct the statement.\n - To further emphasize the curse of dimensionality in NA$^2$M vs NB$^2$M, we perform similar analysis as in Eq (6):\n - |NA$^2$M| / |NB$^2$M| = 6402 / (125640/(D(D-1)) + 101)\n - This value is 1.0 for D=5 (it was 1.0 for D=10 in NAM vs NBM), and it is 4.28 for D=10, 56.31 for D=100, 63.39 for D=1000, etc.\n - I.e., this ratio is much more pronounced when modeling feature interactions.\n - We will add this analysis and respective plot in the final revision.\n- “The feature selection method in NodeGAM by using attention and back-propogation is not complex”\n - Thanks for the remark, we will update that statement in our final revision. Our statement in L76-81 does not target NODE-GAM work directly, but aims at motivating our choice of leaving the feature selection approaches out of scope of our work. We will remove “complex” from “we do not apply complex feature selection algorithms.”\n - We want to emphasize once more, that any of the mentioned feature selection algorithms is complementary and can be applied to NBMs.\n- “It would be nice to show if NBM is better than NodeGAM”\n - We compare against NODE-GAM in Table R.1. NODE-GAM results are reproduced from [11]. Identical train/val/test splits are used and same feature normalization is performed in our codebase, to respect fair comparison. Without any feature selection, NBM is comparable to NODE-GAM, and it outperforms it on 7 out of 9 datasets, admittedly with a small margin. \n - We will add these results in our final revision, together with NODE-GAM implemented and evaluated on our codebase. Finally, we will also add NB$^2$M and NODE-GA$^2$M comparison, which was omitted here due to a limited timeline for rebuttal.\n",
" We would like to thank the reviewer for their remarks. Please note that to address your concern of “poor dataset picking”, we have evaluated NBM against NAM [2] and NODE-GAM [11] on **9 additional datasets** (the 6 largest from the NODE-GAM paper, and 3 missing from the NAM paper), as summarized in Table R.1. NODE-GAM results are reproduced from [11], we use train/val/test from NODE-GAM [11], and perform the same feature normalization: ordinal encoding for categorical features and quantile transform to Gaussian. Even with minimal hyperparameter tuning and no additional model selection (architecture tuning, stochastic weight averaging, etc.) NBMs outperform NODE-GAM on 7 out of 9 datasets (albeit marginally), and outperform NAM on all datasets. As the reviewer themselves noticed, the idea of NBM is perpendicular to that of NODE-GAM and it is possible that bigger gains can be achieved by combining them. We will also revise the paper to include this evaluation. We hope this addresses your concern of dataset selection. Please find more details on this evaluation and answers to all remaining comments below.\n\n### Table R.1\n| Method | MIMIC-II (AUC) | Credit (AUC) | COMPAS (AUC) | Click (ERR) | Epsilon (ERR) | Higgs (ERR) | Microsoft (MSE) | Yahoo (MSE) | Year (MSE) |\n| -------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ---------------- |\n| NAM | 0.8539 $\\\\pm$0.0004 | 0.9766 $\\\\pm$0.0027 | 0.7368 $\\\\pm$0.0002 | 0.3447 $\\\\pm$0.0005 | 0.1079 $\\\\pm$0.0002 | 0.2972 $\\\\pm$0.0001 | 0.5824 $\\\\pm$0.0002 | 0.6093 $\\\\pm$0.0003 | 85.25 $\\\\pm$0.01 |\n| NODE-GAM | 0.832 $\\\\pm$0.011 | 0.981 $\\\\pm$0.011 | **0.742** $\\\\pm$0.009 | 0.3342 $\\\\pm$0.0001 | 0.1040 $\\\\pm$0.0003 | 0.2970 $\\\\pm$0.0001 | 0.5821 $\\\\pm$0.0004 | 0.6101 $\\\\pm$0.0006 | **85.09** $\\\\pm$0.01 |\n| NBM | **0.8549** $\\\\pm$0.0004 | **0.9829** $\\\\pm$0.0014 | 0.7396 $\\\\pm$0.0002 | **0.3316** $\\\\pm$0.0002 | **0.1038** $\\\\pm$0.0002 | **0.2969** $\\\\pm$0.0001 | **0.5817** $\\\\pm$0.0001 | **0.6084** $\\\\pm$0.0001 | 85.10 $\\\\pm$0.01 |\n\n- “Dataset picking”\n - We would respectfully argue the opposite. We have made an effort to select datasets where the difference in the performance between Linear models and full-complexity models (MLP and XGBoost) is the largest, e.g., datasets for which non-linear shape functions and higher-order interactions play a strong role in the optimal solution, and CoverType is a great example of such a dataset. Moreover, we wanted to push our method to the limit, so we evaluated on a dataset with: (i) many input features - Newsgroups has 150k; (ii) many datapoints - Common Objects has 2.5M; (iii) many classes – iNaturalist Birds has 1.5K. Finally, multi-class problems, especially in CV, have been notoriously underexplored in interpretable ML. Nevertheless, the evaluation on 9 extra datasets above should address any concern.\n- “Why are there several datasets in NAM not compared?”\n - MIMIC-II requires training in order to download and use the dataset, which was not possible before the deadline. The Credit and COMPAS datasets do not have sufficient complexity, i.e., Linear and Non-Linear models give comparable AUC and hence performance differences are often within standard deviation. 
Nevertheless, we have since added MIMIC-II, Credit, and COMPAS in Table R.1.\n- “Why do you mostly include multi-class datasets while NAM uses binary ones?”\n - The NAM paper evaluates regression and binary classification, we evaluate on regression, binary classification, and multi-class classification. We noticed that the multi-class problem is more challenging, hence we see this as a merit of our work, not a drawback. Many binary datasets are added in Table R.1.\n- “Multiple high-dimensional tabular datasets used in the NodeGAM”: \n - We added the 6 large-scale datasets from NODE-GAM [11] in Table R.1.\n- “No one would want to visualize the GAM plot of a single pixel”: \n - We would like to point out that the reviewer has misunderstood our image datasets evaluation. As mentioned explicitly in Section 4.1 (under “Image Datasets”), we perform concept-bottleneck-style [23] preprocessing, where each image is represented by interpretable features, eg, a “bird” image can be represented with “striped wings”, “needle-shaped beak”, “long legs”, etc that are predicted from the image using a CNN. Then GAMs are fitted on top of the predicted interpretable features, and shape functions can be visualized accordingly, thus there would be NO plots of GAM on a single pixel. This is an established approach in the literature to create interpretable computer vision models, known as the Concept Bottleneck. We encourage the reviewer to consult the seminal work in [23]. \n\nPlease find response to your other concerns in the subsequent comment below.",
" - Thank you for your comments, and for recognizing our “simple yet effective design” and that our paper is “well- written and organized”.\n- “Sparse architecture” – We evaluate NBMs on two sparse datasets, Newsgroups with sparsity 99.9% (only around 150 words from a vocabulary of 150k words appears per a given article, on average) and Common Objects with sparsity 97% (only around 76 part-attribute compositions from a vocabulary of 2618 compositions are active for a given object, on average). Thank you for your suggestion, we will add the sparseness values in Table 1.\n",
" This paper proposes a new GAM-like model, denoted as Neural Basis Model (NBM) to analyze the learned feature in machine learning models. Comparing with the traditional GAM, NBM learns a set of shape functions for each feature using a single neural network. Comparing with NAM, NBM requires less parameters when the dimensionality is greater than ten. The experiment shows NBM can achieve similar or slightly higher results to NAM on multiple datasets using less parameters. The visualized shape functions show that NBM is more stable than NAM, a lower standard deviation is achieved on 100 runs. The number of parameters of NBM is less than NAM when the dimensionality is high, the trick behind is the convolution-like shared shape function (with a fixed small number of B parameters, Eq.(6)), and this NBM can be extend to the bi-variate situation based on Eq(5). This simple yet effective design outperforms the previous state-of-the-art NAM. Generally, this draft is well- written and organized, the only question I have is about the \"sparse architecture\"(line 120). It would be even better to add one more column about the average sparseness of those tabular datasets to Table 1, which could be another factor behind the acceleration shown in Table 2. yes",
" # After Rebuttal\n\nI appreciate the authors including multiple datasets in rebuttal, but the performance improvement is still not as big. \nThe idea is still somewhat incremental in my idea, but the evaluation seems complete and the speed-up looks good compared to NAM. I believe this paper will have more impact if it releases a good codebase, and also shows more GAM graphs in the image datasets. \n\n# Original Review\n\nThis paper proposes to improve the NAM model by learning a shared basis function for each feature function i.e. for each MLP of each feature, the first few layers are the same across features. The proposed method, called NBM, allows a strong reduction of the # of parameters without sacrificing the accuracy. It also has higher speed and scales to higher number of features. It also models the interaction term, called NB2M, by modeling each pairwise interaction term through another MLP. The performance of NBM is slightly better than NAM in several tabular and image datasets although consistently. # Strengths\n- Clear writing.\n\n# Weaknesses\n- The datasets, IMO, are poorly chosen in this paper. Several image datasets are used, but no one would use GAM on pixels to claim any interpretability. Also, several original datasets in NAM are not compared. In contrast, some rare datasets like CoverType are used. It makes me wonder if there are dataset picking and the reported performance is not representative.\n\n- Low originality: I believe the idea is not new as sharing basis functions is common, e.g. CNN v.s. MLP and multi-task learning. The basis sharing might work better but the hard sharing in NBM can incur negative transfer that lowers the performance, especially in tabular datasets when the features are very heterogenous. Therefore, I think it's more important the author shows more comprehensive evidences.\n\n- Even with the selected datasets, the performance benefit does not seems to be big. 1. **Dataset picking**: Why are there several datasets in NAM not compared? Also, image datasets are a very poor use of GAM - no one would want to visualize the GAM plot of a single pixel. I get that the shared basis functions of NBM should work better with images since each feature is homogenous, but again no one would use GAM on images. If the idea is to test NB2M in high-dimensional features, there are multiple high-dimensional tabular datasets used in the NodeGAM paper that can be compared to. Also, why do you mostly include multi-class datasets while NAM uses binary ones? \n\n2. Doesn't the NB2M still have the quadratic growth of the parameters? In Eq. (5), the parameter $b_{i,j,k}$ still grows quadratic.\n\n3. IMHO, the feature selection method in NodeGAM by using attention and back-propogation is not complex, just a type of regular deep learning architecture.\n\n4. It would be nice to show if NBM is better than NodeGAM since it's also a neural GAM and should have fewer #params and faster throughput than NAM already. It also supports multi-class classification unlike EB2M. But I understand it can be perpendicular to the paper's main point to improve from NAM.\n\n\n[1] Chang, Chun-Hao, Rich Caruana, and Anna Goldenberg. \"Node-gam: Neural generalized additive model for interpretable deep learning.\" arXiv preprint arXiv:2106.01613 (2021). One limitation I can think of is that the hard sharing could lead to negative transfer among feature networks and deteriorates the performance. ",
" The paper proposes a Neural Basis Model (NBM) that utilizes basis decomposition of shape function for regression, binary classification and multi-class classification tasks in the context of achieving state-of-the-art accuracy, model size, and throughput. The central idea of the proposed approach is to dissociate the input feature and individually feed it to a set of shared bases that allows for efficient in inference and compact model size.\n What's good:\n1) The paper organization and presentation are clear mostly.\n2) The idea of decomposing each feature’s shape function into a small set of basis functions seems novel and efficient.\n\nTo be improved:\n1) Seems the main motivation of the paper is interpretability (as addressed in the title). If so, I would like to see a detailed analysis of the shape of the learned basis and a visualization of how these basis represent the inputs. \n2) The comparison approaches are related old I think, how does NBM perform compared with recent implicit functions?\n3) The idea of basis learning is also related to the radial basis functions, I would recommend also including some related discussion. \n1) Have you try the proposed method on the ill-posed tasks? such as image pinpointing and radiance field reconstruction? How does it perform?\n2) Is there any similarity between the learned basis and the basis extracted from PCA-like approaches?\n3) Any further advantages of interpretable modeling apart from the visualization? The paper lacks a limitations analysis.",
" This paper proposes a new transparent model called NBM. NBM improves upon NAM, which learns a NN (neural network) for each feature and the final output is a learned weighted-sum of the outputs of the neural networks. Instead of training independent NNs for each feature like NAM, NBM trains a set of NNs as basis and used them across all features. \nNBM improves the scalability issue of NAM by reducing the number of independent NNs. NBM is more stable than NAM because of the use of shared NNs. \nExperimental results show NBM and its variant NB^2M can get better results than NAM and NA^2M. Strengths:\n- The proposed model achieved very good experimental results.\n- The proposed model is easy to follow.\n\nWeeknesses\n- I had to switch back and forth multiple times while reading the paper. For example, the NAM model, an extension of GAM and the predecessor of the proposed NBM, is not mentioned in Section 3.1 Background but in Section 3.3 Dicussion after NBM is introduced. I feel it's more natural to introduce GAM, NAM, then NBM, which helps readers understand the contribution of this paper.\n- The discussion regarding the limitation of NBM is quite limited and can be improved. See my questions below. - Do you do any data normalization before training NBM? I didn't remember any related discussions in the paper or supplimentary PDF. If no, when the features' magnitudes vary, does NBM essentially work the same way as NAM since each NN is supposed to work for a particular range of inputs? I believe it's worth to discuss this issue a little bit more.\n- Why does NBM achieve better experimental results than NAM? NAM seems to be more powerful in its representative power. Is it the case where NAM achieves better training accurracy/loss but NBM gets better validation/test results? If that's case, does the shared NN work as some implicit regularizations? Will there be some regularizations that make NAM achieve similar results as NBM? - It would be great if the authors can discuss more about what kind of high dimension data can/can't be handled by NBM."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3
] | [
"jLYYgW6MeD",
"eKn-SCJVzT",
"hz2Lxvg6wIH",
"zjS5knxuU8l",
"CBh_j0TVmlE",
"RNTBuHyYZO",
"v6NniSz4o8K",
"nPpQWqST3eP",
"YDkxWSBGXFa",
"nips_2022_fpfDusqKZF",
"nips_2022_fpfDusqKZF",
"nips_2022_fpfDusqKZF",
"nips_2022_fpfDusqKZF"
] |
nips_2022_TwuColwZAVj | Scalable Interpretability via Polynomials | Generalized Additive Models (GAMs) have quickly become the leading choice for interpretable machine learning. However, unlike uninterpretable methods such as DNNs, they lack expressive power and easy scalability, and are hence not a feasible alternative for real-world tasks. We present a new class of GAMs that use tensor rank decompositions of polynomials to learn powerful, {\em inherently-interpretable} models. Our approach, titled Scalable Polynomial Additive Models (SPAM) is effortlessly scalable and models {\em all} higher-order feature interactions without a combinatorial parameter explosion. SPAM outperforms all current interpretable approaches, and matches DNN/XGBoost performance on a series of real-world benchmarks with up to hundreds of thousands of features. We demonstrate by human subject evaluations that SPAMs are demonstrably more interpretable in practice, and are hence an effortless replacement for DNNs for creating interpretable and high-performance systems suitable for large-scale machine learning.
Source code is available at \href{https://github.com/facebookresearch/nbm-spam}{\ttfamily github.com/facebookresearch/nbm-spam}. | Accept | The paper notes that polynomial functions are inherently interpretable models, and takes algorithmic advantage of the connection between polynomials and tensors by learning the coefficients of the polynomials using a low-rank tensor factorization. The resulting algorithm is shown to outperform prior SOTA interpretable models and to match blackbox model performance on several data sets. | train | [
"_qnXKCvLof9",
"it65fSGlEyc",
"k-XMKRS9aBB",
"XSG3tN0OuLl",
"19uV1HlJoVd",
"2mFqRLodPhT",
"ta-_KtdABil",
"9sFHjci-PcJ",
"67_Nt8JxPUs",
"E7d2EfVz4gx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author-response phase closes today. Please acknowledge the author rebuttal and state if your position has changed. Thanks!",
" The author-rebuttal phase closes today. Please acknowledge the author rebuttal and state if your position has changed. Thanks!",
" I do not have futher concerns if the above will be addressed in the future version.",
" We would like to thank the reviewer for the positive feedback and the great suggestions. We answer the questions below:\n\n- *It would be better to provide the details in the human subject evaluation. For example, the authors could provide the explanation provided by the models in the bird prediction task. Then we may have a better understanding on how the model is interpreted.* \n - Thanks for the suggestion! We have included some sample explanations on the CUB dataset outputted by MLP+LIME and SPAM Order 2 models in the Appendix Section D along with more details about the human subject evaluation. We will include a short discussion in the main paper as well.\n\n- *“In addition, what are the reasons to choose the four datasets presented in the paper? I am concerned that the results are cherry picking so that the proposed method cannot work well in prediction.”*\n - Thank you for raising this concern! We would first like to clarify that we evaluate on 7 datasets and not 4. Next, to address cherry-picking: on fact, during our dataset selection procedure, we gave great care to have datasets that satisfied:\n - (a) gap between linear and non-linear classifiers: several interpretability benchmark datasets have small performance gaps between linear and non-linear classifiers, where it is obvious that for interpretability one should prefer a linear model. Therefore, we wanted to select some datasets where there was in fact a tradeoff between interpretability and performance so that we could highlight the improvements of SPAM-like models.\n - (b) scale: we wanted to explore all possible dataset scale - large-scale in number of features (e.g., Newsgroups, 150K features), large-scale in number of samples (e.g., Common Objects, 2.6M samples), as well as small scale datasets.\n - However, to assuage any concerns of dataset selection, we have run additional experiments on 9 extra tabular datasets that are commonly present in the literature. The results of this are summarized in Table 1 in the common response above. We find that on several datasets, order 2 interactions match XGBoost performance. We will include a full version of this comparison with order 3 interactions as well in the main paper. We will extend our experimental section to include all these new datasets, making our total benchmark 16 datasets, the largest evaluation of interpretable models to the best of our knowledge.\n\n- *“But it seems a bit tricky since the DNN is just a simple MLP. Outperforming an MLP may not be a big advantage.”*\n - The reviewer is indeed correct that for some datasets MLPs might not be enough. We do, in fact, compare with XGBoost as well for all methods that can support it. We agree that it is indeed possible to extract some benefits on top of XGBoost/MLPs using some specific architectures, however, we believe that is tangential to the objective of this paper, which is to highlight a significant improvement in fully-transparent models when interpretability is a requirement. As our human subject evaluations suggest, current post-hoc explanations are not reliable for very large-scale datasets, and hence the precise model being used is extremely dependent on the application and requirement of interpretability. Nevertheless, this is a valuable point that the reviewer has highlighted and we will be sure to address this in the main paper as well.\n\n- *“What if we continue to increase the length of explanation? Is it the 7 global optimal in Figure 2C?”*\n - Thanks for this point! 
Indeed, a short explanation appears to be more faithful and useful from a human subject perspective. We did increase the length of the explanation to 15 and found that the performance decreased further. We believe this is due to excessive information that can be confusing for users. We will add this point in the main paper.\n",
" We would like to thank the reviewer for their assessment of the paper and their suggestions. Please find responses to your questions below:\n - *Results for $NA^2M$*: \n - Thank you for pointing out that issue! Please see Table 2 in the common response above for the comparison of $NA^2M$ with SPAM and NeuralSPAM (Order 2). We can see that while SPAM by itself doesn’t always beat $NA^2M$ (due to the lack of nonlinearities), NeuralSPAM outperforms $NA^2M$ on all but 1 dataset. On CoverType, we believe that NeuralSPAM has too few parameters compared to $NA^2M$ (which has one MLP for each possible combination) and hence underfits severely. When we increase the number of subnets to 8 for NeuralSPAM we obtain an accuracy of 0.9022 on Order 2, whereas it is impossible to train with 8 subnets for $NA^2M$ for order 2 due to memory usage. Moreover, adding subnets does not improve $NA^2M$ performance significantly.\n\n- *Empirical Results for Shared Bases:* \n - Please see Table 3 in the common response above for the change for multi-class problems. We observe a consistent improvement from sharing bases on all datasets due to the reduction in the number of parameters. We will include this in the final paper, thank you for the suggestion!\n\n- *Shared bases across different degrees*: \n - That is an excellent suggestion! While we haven’t explored that in our current set of experiments, we will update with a comparison with sharing bases across degrees in the final version.\n\n- *Scalability of SPAM*: \n - Thanks for this point! In practice, since the SPAM dimensionality increases only logarithmically with the dimensionality of the data, Linear-SPAM can be scaled without issue to datasets with 150K features (as in Newsgroups), in contrast to other interpretable approaches like NAM or EBMs. \n - We agree that Neural-SPAM will have less scalability due to the non-linear mapping. However, if we contrast that growth with previous work such as NAM, our growth is still $\\mathcal O(dk)$ for a $d$-dimensional input with degree $k$ interactions, whereas NAM, for instance, scales as $\\mathcal O(d^k)$ for degree $k$ interactions. \n - Regarding images and structured data: We expect that SPAM will be applied on interpretable concepts for images/text, e.g., as done via concept bottleneck models. One can, however, apply SPAM to the output of a black-box model as well without issue, e.g., ResNet-50 output features, to improve performance.\n\n- *Running time of SPAM*: \n - Thanks for this suggestion! Please see Table 4 in the common response above for runtime comparison of LinearSPAM with NAM on some datasets. We will include this discussion in the main paper as well. The summary of this comparison is that Linear versions of SPAM perform significantly faster than NAM and MLPs, even at order 3. For NeuralSPAM, we observe that it is faster than NAMs (order 2), however the additional parameters indeed decrease the throughput.",
" We would like to sincerely thank the reviewer for their appraisal of our paper. Please find responses to your concerns below:\n\n- *I would avoid adding \"fully-interpretable models\" in the abstract. Is SPAM really \"fully-interpretable\"? I think \"inherently interpretable\" could be better.*\n\n - Thank you for the remark! We agree that inherently interpretable is a better characterization of our approach, as fully-interpretable is a stricter definition that polynomials will not obey in certain cases (e.g., large degree of polynomial).\n\n- *I think adding some more analysis about the learned SPAM models (i.e., which features or feature interactions are important) could help readers understanding what has been learned inside models for some of the tasks.*\n\n - Thank you for the excellent suggestion! In the Appendix Section D we present examples of explanations that we used for the CUB-200 dataset in our human subject evaluations. We will provide, in the main paper, some examples of explanations for different datasets as well.\n",
" We would like to sincerely thank the reviewers for their positive feedback and highly constructive feedback. We will be sure to incorporate it into our final version. \n\n## Comparison on additional benchmarks\n\nIn addition to the 7 datasets we considered in the main paper, we evaluate on 9 additional benchmark datasets common in the tabular learning literature. We consider the MIMIC2, Credit and COMPAS datasets from the Neural Additive Models (NAM) work [Agarwal et al 2021], and the Click, Epsilon, Higgs, Microsoft, Yahoo and Year datasets from [Chang et al, 2021]. The result of this is summarized in Table 1. For MIMIC2, Credit and COMPAS we report the AUC (higher is better). For Epsilon, Higgs and Microsoft, we report error rate (lower is better). For the remaining regression tasks we report MSE (lower is better). The best interpretable model is in **bold**.\n\n### Table 1\n\n| Method | MIMIC2 (AUC) | Credit (AUC) | COMPAS (AUC) | Click (ERR) | Epsilon (ERR) | Higgs (ERR) | Microsoft (MSE) | Yahoo (MSE) | Year (MSE) |\n| -------------------- | ------------- | ------------- | --------------- | -------------- | -------------- | -------------- | ---------------- | -------------- | ----------- |\n| NAM | 0.8539 | 0.9766 | 0.7368 | 0.3447 | 0.1079 | 0.2972 | 0.5824 | 0.6093 | 85.25 |\n| NODE-GAM | 0.832 | 0.981 | 0.742 | **0.3342** | 0.1040 | 0.2970 | 0.5821 | 0.6101 | 85.09 |\n| LinearSPAM (Ord 2) | 0.8514 | 0.9836 | **0.7426** | 0.3791 | **0.1011** | 0.2881 | 0.571 | 0.5923 | 81.306 |\n| NeuralSPAM (Ord 2) | **0.8664** | **0.9850** | 0.7411 | 0.3348 | 0.1020 | **0.2750** | **0.5671** | **0.5869** | **79.99** |\n| XGBoost | 0.843 | 0.978 | 0.744 | 0.3334 | 0.1112 | 0.2328 | 0.5544 | 0.5420 | 78.53 |\n\n\n\n## Comparison with $NA^2M$\n\nWe compare NeuralSPAM with $NA^2M$ (NAM with pairwise features) in Table 2. SPAM by itself doesn’t always beat $NA^2M$ (due to the lack of nonlinearities), NeuralSPAM outperforms $NA^2M$ on all but 1 dataset. On CoverType, we believe that NeuralSPAM has too few parameters compared to $NA^2M$ and hence underfits severely. When we increase the number of subnets to 8 for NeuralSPAM we obtain an accuracy of 0.9022 on Order 2, whereas it is impossible to train with 8 subnets for $NA^2M$ for order 2 due to memory usage. For CH dataset, we report MSE (lower is better). For the rest, we report AUC or ACC (higher is better).\n\n\n### Table 2\n\n| Method | CH (RMSE) | FICO (AUC) | CovType (AUC) | CUB (ACC) | iNat (ACC) | \n| -------------------- | ---------- | ----------- | ---------------- | ------------- | ---------- |\n| NAM (Order 2) | 0.4921 | 0.7992 | 0.8872 | 0.7713 | 0.4591 | \n| NeuralSPAM (Order 2) | 0.4914 | 0.8011 | 0.7770 | 0.7762 | 0.4689 | \n\n\n\n\n## Effect of sharing bases in SPAM in multi-class problems\nWe observe a consistent improvement from sharing bases on all datasets due to the reduction in the number of parameters. We report accuracy (higher is better).\n\n### Table 3\n\n| Method | News (ACC) | CUB (ACC) | iNat (ACC) | Comm Obj.(ACC) |\n| ------------------------------- | ------------- | ---------- | ------------- | ---------------- |\n| LinearSPAM without shared bases | 0.8334 | 0.7575 | 0.4202 | 0.2195 |\n| LinearSPAM | 0.8472 | 0.7786 | 0.4605 | 0.2361 |\n\n\n\n\n## Runtime comparison of LinearSPAM and NeuralSPAM with NAM, MLP \n\nWe report the throughput (in terms of samples per second) for 4 datasets on different models. 
The summary of this comparison is that Linear versions of SPAM perform significantly faster than NAM and MLPs, even at order 3. For NeuralSPAM, we observe that it is faster than NAMs (order 2), however the additional parameters indeed decrease the throughput.\n\n### Table 4\n\n| Method | CH | FICO | CovType | News |\n| -------------------- | -------- | -------- | -------- | -------- |\n| NAM | 5x10^5 | 1.2x10^5 | 8x10^4 | 23 |\n| NAM (Order 2) | 1.1x10^4 | 6000 | 3000 | \\- |\n| LinearSPAM (Order 2) | 6.1x10^7 | 6.7x10^7 | 6.1x10^7 | 2.6x10^6 |\n| NeuralSPAM (Order 2) | 1.7x10^5 | 7912 | 4103 | \\- |\n| LinearSPAM (Order 3) | 3.2x10^7 | 3.7x10^7 | 3.9x10^7 | 1.8x10^5 |\n| NeuralSPAM (Order 3) | 1.1x10^5 | 5322 | 2681 | \\- |\n| MLP (Small) | 1.3x10^7 | 1.3x10^7 | 1.3x10^7 | 2.2x10^5 |\n| MLP (Big) | 4.6x10^6 | 5x10^6 | 4.5x10^6 | 5.4x10^4 |\n",
" The paper proposes to make Generalized Additive Models (GAMs) scalable, by tensor rank decompositions of polynomials. Specifically, the traditional GAMs are re-written into the tensor computation form. The weight matrices are then processed with rank decomposition, making the GAMs a series of inner products and are more computationally efficient. The authors also propose several tricks to help learning, including feature rescaling, sharing basis across classes, and extending GAMs with non-linear operations. Experiments under different data domains are conducted, including MLPs and CNNs. The experiments make use of the concept bottleneck backbone to further improve its performances. Human subject evaluations are also included since this paper is related with interpretability. This is a well-written paper, with in-depth understanding of GAMs and comprehensive experiments. Strengths\n1. The paper addresses an important problem in interpretable machine learning, i.e., how to build efficient and effective models besides interpretability. The paper chooses GAMs as prototypes, making a smart application of tensor decomposition to make GAMs more efficient. The paper is very well-written.\n2. The paper considers a series of extensions for the proposed SPAM models. These extensions are consistent with the core contribution.\n3. Comprehensive experiments are conducted. With both quantitative analysis and human studies.\n\nWeaknesses\n1. I would avoid adding \"fully-interpretable models\" in the abstract. Is SPAM really \"fully-interpretable\"? I think \"inherently interpretable\" could be better.\n2. I think adding some more analysis about the learned SPAM models (i.e., which features or feature interactions are important) could help readers understanding what has been learned inside models for some of the tasks. Please see the Weakness part. The paper proposes a series of extensions of SPAM to address some possible limitations.",
" This paper proposes SPAM which extends Generalized Additive Models (GAM) with polynomials. SPAM learns a polynomial model that can capture any-order interactions among features. The authors leverages the proved symmetric property and assumed low-rank property of the weight matrices to simplify the optimization problem. For multiple-class classification tasks, the authors propose to use shared bases to further speed up training. The authors show under some assumption, the generalization error of SPAM scales exponentially w.r.t. the chosen low-rank order. The authors conduct experiments with several datasets and show that proposed SPAM achieve better results than various of baseline models. The authors also conduct human evaluation on how good SPAM is comparing with a black box model and the results show SPAM achieve much better results than using a black box model with post-hoc explanation. Strengths:\n- The idea of using polynomials to improve GAM is novel. The authors address the challenges of using polynomials well (mainly, scalability) by leveraging some nice properties of the problem.\n- The authors provide thorough analysis for the proposed models. The charaterization of the generalization error, the geometric rescaling, data preprocessing, and shared bases idea for multi-class problems help the readers get a deeper understanding of the proposed model.\n- The experimental section is thorough. Other than regular comparsion with baseline methods, the authors check the effects of order, sparsity assumtion for higher order interactions and verify the generalization error empirically. The human evaluation setting is novel and provide more evidence on why a fully-inpretably model might be better.\n\nWeaknesses:\n- For Table 1, I think the authors can include results for NA^2M since SPAM at lease uses order-2 interactions. - Can you show some empirical resutls about the effects of using shared bases? I'm curious to see how much that trick would improve. Do you think it makes sense to use a shared bases for $u_1, u_{2i}, ..., u_{ki}$ in equation 3? - I think it's worth noting the running time of the SPAM on different datasets to help understand the scalability of SPAM. I guess for extremely high-dimension datasets (like high-resolution images or videos or texts), SPAM might not be able to train efficiently. It would be great if the authors can provide some discussions on this part.",
" This paper proposes a fully-interpretable model which is in the line of generalized additive models (GAMs). In order to let the fully-interpretable model scalable, the authors reduce the conventional GAMs to a polynomial one. By doing tensor decomposition, the proposed model can be effortlessly scalable and model higher-order feature interactions without a combinatorial parameter explosion. Experimental results validate the effectiveness of the proposed method both in the perspectives of prediction and human evaluation. Theoretical results also guarantee their model’s performance. Strengths: \n1. The technique of this paper is sound. The authors provide a clear view and logic of the proposed idea with the basic form of the polynomial additive models and how it can be scalable by low-rank decompositions. The improving on geometric rescaling and shared bases for multi-class problems also make sense.\n2. So is the clarity of this paper according to the above comments. \n3. The human subject evaluation is also convincing. \n\nWeaknesses: \n1. It would be better to provide the details in the human subject evaluation. For example, the authors could provide the explanation provided by the models in the bird prediction task. Then we may have a better understanding on how the model is interpreted. \n2. The significance of the proposed method seems fair. The proposed method is not post-hoc, which means the prediction and interpretation all count on the proposed method. Table 1 shows pretty good prediction results where the proposed method can mostly outperform all the baselines including DNN. But it seems a bit tricky since the DNN is just a simple MLP. Outperforming an MLP may not be a big advantage. In addition, what are the reasons to choose the four datasets presented in the paper? I am concerned that the results are cherry picking so that the proposed method cannot work well in prediction. If so, a slightly better interpretable ability may not be worth applying to real world tasks. Good prediction plus post hoc explanation may be sufficient.\n What if we continue to increase the length of explanation? Is it the 7 global optimal in Figure 2C? Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"E7d2EfVz4gx",
"67_Nt8JxPUs",
"2mFqRLodPhT",
"E7d2EfVz4gx",
"67_Nt8JxPUs",
"9sFHjci-PcJ",
"nips_2022_TwuColwZAVj",
"nips_2022_TwuColwZAVj",
"nips_2022_TwuColwZAVj",
"nips_2022_TwuColwZAVj"
] |
nips_2022_kHeotl7q9dU | NS3: Neuro-symbolic Semantic Code Search | Semantic code search is the task of retrieving a code snippet given a textual description of its functionality. Recent work has been focused on using similarity metrics between neural embeddings of text and code. However, current language models are known to struggle with longer, compositional sentences, and multi-step reasoning. To overcome this limitation, we propose supplementing the query sentence with a layout of its semantic structure. The semantic layout is used to break down the final reasoning decision into a series of lower-level decisions. We use a Neural Module Network architecture to implement this idea. We compare our model - $NS^3$ (Neuro-Symbolic Semantic Search) - to a number of baselines, including state-of-the-art semantic code retrieval methods, such as CodeBERT, CuBERT and GraphCodeBERT, and evaluate on two datasets - Code Search Net (CSN) and Code Search and Question Answering (CoSQA). On these datasets, we demonstrate that our approach results in higher performance. We also perform additional studies to show the effectiveness of our modular design when handling compositional queries. | Accept | The paper studies the problem of retrieving code snippets given textual queries (NS3, Neuro-Symbolic Semantic Code Search). The work is motivated by language models’ limitations on encoding longer and not providing a faithful explanation of their reasoning on compositional queries. NS3 supplements the query sentence with a layout of its semantic structure, which is then used to break down the final reasoning decision into a series of lower-level decisions. NS3 outperforms baselines on CodeSearchNet and CoSQA.
Overall, the reviewers liked the motivation of the work. A lot of concerns have been raised with extensive responses from the authors. Some of these concerns remain and have to be acknowledged and clarified in the modified document. **Despite the limitations of the work, I think "accepting" this work outweighs "rejecting" it, assuming that the authors will put due diligence into improving their drafts based on their latest experiments (outlined in the author's response), along with a clear discussion of their limitations.**
Let me start with the strengths:
- All reviewers have found the work interesting.
- Strong empirical results: NS3 is evaluated on two well-established benchmarks, CodeSearchNet and CoSQA. The proposed method outperforms the state of the art by a large margin.
- The paper includes detailed experiments on reduced dataset settings and ablation studies.
Here are several points of concern that came up in the reviews:
- NS3 requires rule-based parsing of natural languages (unlike other LM-based baselines such as CodeBERT). The difficulty of this construction ("roughly two weeks") brings up several concerns:
- These rules might not generalize to different programming languages (e.g. Python to C++): on this point, the authors have reported some evidence of generalization, though the reviewer "kn2F" has viewed it as "low (<=41%)" and remains unconvinced. I suggest the authors report these analyses in their main draft. I suspect these numbers need to be compared with the corresponding numbers of their end-to-end baselines.
- These rules might not be transferrable to different **natural** languages. The authors have acknowledged this limitation. I suggest the authors be explicit about such limitations in their work.
- These rules might not generalize to longer queries; this is acknowledged in the author's response. I suggest the authors be upfront about these issues in their draft.
| train | [
"XFTppm2CTqW",
"ZhdkB4D_90",
"ZsM7v9gRmsq",
"L8fcn8paQNp",
"HQ9QwGGzPJz",
"cosyre0Pld5",
"VWfeOZlB_O",
"6LNiC6sG3jV",
"BNLkhGqoghr",
"RELTsPKGqXP",
"a9LqXGV2RdC",
"OGmzPbl2eWl",
"65_eFSsW54o",
"haZvCLBEQ-uH",
"cBleEJpIKT",
"HR3y86wZ_Pa",
"6z8_cqWb714",
"3jYg0KpyF6",
"ynjqfwA6Cl",
"NCR_oclKm2O",
"Zn488xx0lo",
"HjCv-s3_HrU",
"ndfG3I7-W_l",
"REjXBoVvCM",
"ZNF0I8gzSWU",
"bzEuFDADzfa"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my concerns regarding parsing rules and providing additional experiments about the parser success rate on other datasets. However, given that the parser success rate on other CodeSearchNet datasets is still low (<=41%), a user still needs to add NL parsing rules when they use the proposed method on other datasets. Moreover, training a language model on another natural language is automatic and requires much less effort by NLP experts than writing parsing rules. So I am not changing my review scores at this time.",
" Thank you for the explanations and the additional experiments to resolve my major concerns. The paper now is much better with details to support the claim. I increased my score to 5.",
" Dear Reviewer, \n\nWe wanted to use this last chance to check in and see whether you found our following clarifications and updates satisfactory:\n\n- We have answered your questions about the parser and provided additional experimentation that shows how our parser can successfully generalize to new datasets and new programming languages. \n\nWe encourage you to refer to our previous comments for details on these.\n\nBest regards, \n\nAuthors\n",
" Dear Reviewer, \n\nWe wanted to use this last chance to check in and see whether you found our following clarifications and updates satisfactory:\n\n- In response to your concern about the mismatch between one of our claims and the experimental results, we have clarified the corresponding claim according to the results and updated the paper. Please see our previous comment to you, as well as the general comment on this matter for details.\n- In response to your conclusion about the sample selection bias in our evaluations, we have pointed out a misunderstanding of the experimental setup in one of our baselines which led to the incorrect conclusion.\n- Additionally, following your suggestion, we have provided experimentation results on the full dataset to demonstrate the performance gain is maintained in the absence of sample selection.\n- We have answered your questions about the parser and provided additional experimentation that shows how our parser can successfully generalize to new datasets and new programming languages. \n\nWe encourage you to refer to our previous comments for details on these points.\n\nBest regards, \n\nAuthors\n",
" Thank you for your responses and clarifications! They clarify some of my questions. Based on a clearer understanding of the paper, I decided to keep my score at 5.",
" Dear Reviewer, \n\nWith the nearing end of the discussion period, we wanted to reach out again and check if you were satisfied with our answers below, as well as see if there are any additional questions or concerns we could address for you.\n\nBest Regards, \n\nAuthors\n",
" Dear Reviewer, \n\nWith the nearing end of the discussion period, we wanted to reach out again and check if you were satisfied with our answers below, as well as see if there are any additional questions or concerns we could address for you.\n\nBest Regards, \n\nAuthors\n",
" Dear Reviewer, \n\nWith the nearing end of the discussion period, we wanted to reach out again and check if you were satisfied with our answers below, as well as see if there are any additional questions or concerns we could address for you.\n\nBest Regards, \n\nAuthors\n",
" Thank you for your clarifications, especially that the modifications of the paper were easy to see in blue.\n\nI also think that the evaluations got much better with the additional experiments and information. What could make the paper even better would be to figure out why the method performs worse with deeper queries and maybe improve the method.\n\nTwo small observations:\n- maybe it would be helpful to mention explicitly in Section 3.3 that for masked tokens, only the verb-preposition embedding is included, and the entity discovery module embedding is not. It can of course be deduced, but I think it could be helpful.\n- there is a new typo on line 32: loosing => losing\n\nI increased my score to Accept.",
" Dear Reviewer, \n\nWe wanted to follow up to check whether our responses below have addressed the concerns you have specified in your review. To briefly summarize the points covered in our response:\n\n- We have answered your questions about the process that went into the implementation of the parser.\n- We have provided additional experimentation that shows how our parser can successfully generalize to new scenarios. \n\nDo not hesitate to let us know if you have any follow-up questions that we could further answer. \n\nBest regards, \n\nAuthors\n",
" Dear Reviewer, \n\nWe wanted to follow up to check whether our responses below have addressed the concerns you have specified in your review. To briefly summarize the points covered in our response:\n\n- We have addressed your concern about the mismatch between our claim and experimental results, clarified the corresponding claim, and updated the paper.\n- We have answered your questions about the evaluation setup, and the rationale behind our setup.\n- We have provided experimentation on the full dataset to demonstrate the performance of the models without additional sample selection biases.\n- We have answered your questions about the parser and provided additional experimentation that shows how our parser can successfully generalize to new scenarios. \n- Following your suggestions, we have made some other changes to the paper, such as providing additional statistics you had requested.\n\nDo not hesitate to let us know if you have any follow-up questions that we could further answer. \n\nBest regards, \n\nAuthors",
" Dear Reviewer, \n\nWe wanted to follow up to check whether our responses below have addressed the concerns you have specified in your review. To briefly summarize the points covered in our response:\n\n- We have addressed your concern about the mismatch between our claim and experimental results, clarified the corresponding claim, and updated the paper.\n- We have pointed out a small misunderstanding of the experimental setup in one of our baselines which led to an incorrect conclusion about the prevalence of the sample selection biases in our evaluation. Additionally, by your suggestion, we have provided experimentation results on the full dataset to demonstrate the performance gain is maintained in the scenario without any biases.\n- We have answered your questions about the parser and provided additional experimentation that shows how our parser can successfully generalize to new scenarios. \n\nDo not hesitate to let us know if you have any follow-up questions that we could further answer. \n\nBest regards, \n\nAuthors\n",
" Dear Reviewer, \n\nWe wanted to follow up to check whether our responses below have addressed the concerns you have specified in your review. To briefly summarize the points covered in our response:\n\n- We have addressed your questions about the model, and implementation.\n- We have followed your suggestions for making the paper clear and have revised the Introduction and Section 3.3. for that purpose.\n- We have addressed your questions about the model performance behavior on queries of larger depth.\n\nDo not hesitate to let us know if you have any follow-up questions that we could further answer. \n\nBest regards, \n\nAuthors\n",
" \nWe are grateful for your positive comments and thoughtful suggestions. We are pleased to hear that you think the proposed method is interesting and novel. We understand the concerns you raised about the clarity of the writing. We have fixed typos, added citations, and revised the paper to expand on method ideas in the introduction, and improve the clarity of Section 3.3. We include more discussion about the intuition behind the two modules and the high-level ideas of how the module's outputs are used to compute relatedness scores. Sec 3.3 is revised to elaborate more on the module’s implementation details. Please see our general response for the detailed list of changes made to the paper. We hope that you find the overall quality of the paper has improved with this revision. \n\nIn this response, we clarify implementation details about the action module (Sec 3.3) and address questions about batching and performance at different depths.\n\n### **Clarification of the details of the action module (Sec 3.3) (Weakness 2 & Q3)**\n\n- __Modeling “preposition”:__ For each entity, there can be zero or one preposition, but each verb may have multiple entities with prepositions. Here, Load 0 indicates that there is no preposition associated with “all tables”. This design decision becomes more clear in a scenario where both entities have an associated preposition, e.g. “Load from table to file.”, In that case, the entity “table” would be matched with “load from”, and entity “file” would be masked, and the input would only contain the embedding for “load to”. We are sorry for causing confusion about this, in Figure 2 we truncated that part for brevity, and also to avoid overloading the figure with too much detail.\n- __Handling multiple entities & confusion in Figure 4:__ We handle multiple entities by having distinct copies of the module with the corresponding dimensionality for different numbers of input entities (LL202-204 in the revised version of the paper). In our current implementation, these modules do not share weights, but in a future implementation, we believe it might be beneficial to change that. To form the input vector for the Action module, we concatenate the code token embedding (dim=768), and, depending on the number of inputs one or more join embeddings of verb and preposition (dim=7) with the corresponding output score of the entity discovery module (dim=1). So, the final dimensionality of the input vector to the action module is 768 + 8*num_inputs. \n- __Updates to the Method description in Section 3.3:__ The details you queried about were missing from the paper, so we have updated Section 3.3 to include information about handling multiple inputs to the action module, and provide a clearer explanation behind our handling of prepositions.\n\n### **Why is batching hard for the model? (Q1)**\n\nThis is because our model has dynamic architecture on different query-candidate pairs since we use the semantic parse of the query to construct the layout of the network. As a result, we have a different network architecture from one query to another, Thefore, the computations for both forward and backward passes for every example are different. \n\n### **Performance of our method at different query depths (Q2)**\n\nThank you for raising this great question! We agree that it is interesting to investigate why our method performs worse on queries of larger depths (e.g., depth=2/3). 
A related observation is that the baselines perform similarly on all depths, which shows that there is no significant difference in terms of difficulty on these different subsets of queries. One hypothesis is that more compositional queries are also harder to parse for the parser, having a higher chance of ending up with a noisy or incorrect parse among longer queries. This in turn affects our model both during learning and inference. We didn’t validate this hypothesis via experiments and plan to do this in the final version. \n\n### **The \"Background\" section makes the \"Related work\" section redundant (Weakness 3)**\n \nOur intention for the background section was to provide preliminary information and formal details, not just limitations. We are grateful for pointing out this inconsistency and will work to remove the redundancy between the two sections in the paper.\n",
" Thank you so much for providing a careful assessment of our paper. We are happy to hear that you found our approach of combining Transformers with semantic structure from the CCG parser interesting. We hope to have addressed some of your concerns in the general comment to all reviewers, as well as in the comments below. \n\n### **Performance comparison on the full test set (i.e., including the unparsable instances) (Limitation 1 & Weakness 3)**\n\nThank you for this thoughtful suggestion! We agree that reporting results on the full test set (i.e., including unparsable instances) is helpful and important. For the CoSQA dataset, we have performed experiments evaluating the model performances on both parsable and unparsable parts of the test portion of the dataset. The results are shown in the Table below. As you can see, while there is some shift in performance on the full dataset, we still see improvement in the application of NS3. This table has also been added to Appendix, section C.3.\n\n|Method|MRR|P@1| P@3|P@5|\n|---------|------|-----|-----|-----|\n|CodeBERT| 0.29 | 0.152| 0.312 | 0.444 |\n|GraphCodeBERT| 0.367 | 0.2 | 0.447 | 0.561 |\n|NS3| 0.412 | 0.298 | 0.452 | 0.535 |\n\nFurthermore, we want to clarify what we believe to be a misunderstanding of the experimental setups in our Table 1 and GraphCodeBERT paper. In our work, we follow the setup used in CodeBERT, which includes ranking the correct code snippet among 999 distractors (LL263-267). However, in GraphCodeBERT both the dataset, and the experimental setup are a little different. Firstly, they remove what they consider to be noise from the dataset, so they operate on a slightly different set of queries. Next, instead of using 999 distractors, they evaluate against a larger number of distractors, which explains why the MRR scores reported in their paper are generally lower (0.692) than those that are reported by CodeBERT and us (0.812). But the scores that we have obtained for CodeBERT on just the parsable data in CodeSearchNet are comparable to those reported in the CodeBERT paper (0.873 vs 0.868), which suggests that the parsable examples are not significantly easier for the models. \n\n### **Applicability of the method (parser) in arbitrary natural language scenarios (Weakness 1)**\n\nWhile we agree that the parser poses a limitation to the applicability of our method, we want to stress that even the current implementation of the parser has demonstrated the ability to generalize outside of the dataset and the programming language for which it was created. In the paper we were able to apply the parser to CoSQA dataset without modifications, having developed it using only the training portion of the CodeSearchNet dataset. Additionally, in the table below we demonstrate the evaluation of the parser on a new dataset of Python code and English queries, as well as other datasets of 5 different programming languages and their corresponding English queries. As it can be seen from the results below, in all these cases the parsable portion of the queries is non-trivial, and our model has the potential of offering a measurable improvement on those queries. We invite you to refer to our general comment on the topic of parser for a broader discussion of the matter.\n\nFinally, we want to highlight, that it is always possible to output the results of the first-stage ranking model, CodeBERT in our case, if the query could not be parsed, still getting the boost in performance on the queries that can be parsed. 
This analysis has been added to Appendix, Section A.4, we also include the summary of the evaluation results table below for your convenience.\n\n|Language | Dataset | Parser Success Rate|\n|------------|----------|-------------------------|\n|Python| CoNaLa auto-mined | 0.62 |\n|Python| CoNaLa manual (train) | 0.65 |\n|Python| CoNaLa manual (test) | 0.63 |\n|Go| CodeSearchNet | 0.32 |\n|Java| CodeSearchNet | 0.33 |\n|Javascript| CodeSearchNet | 0.41 |\n|PHP| CodeSearchNet | 0.43 |\n|Ruby| CodeSearchNet | 0.35 |\n\n### **Elaboration on the influence of the parsable sample selection on the dataset? (Q1)**\n\nFirst of all, we would like to highlight the point from above, that the trends of model performances remain the same when evaluating on all examples. In addition, Appendix Section A.3 and Table 4 have a discussion on noisy samples in the datasets. To summarize briefly, during manual observations of failed parses we have found that a lot of the sentences that could not be parsed were not in fact queries of code, but rather things like URLs, signatures of functions, or comments unrelated to the semantics of the functions. And finally, we want to stress again that for the large portions of the dataset that we could parse we improve the performance by a significant margin, so combining our method as a second step on top of another base method, e.g. CodeBERT, whenever parses are available is a simple way to garner that improvement. ",
" ### **Inconsistent claim in the paper, as the model’s performance does not improve on deeper queries (Weakness 2)**\n\nThank you so much for raising this crucial point. After careful consideration, we have updated the wording in the paper so that the claims reflect more precisely the strengths of our approach. We hope that you find our changes have made our submission better. Specifically, our concern is about the inability of existing models to obtain a representation of text that remains faithful to the details and query’s semantics. We demonstrate that NS3 achieves such faithfulness to details in the study in Figure 7, where we replace a single entity or a single action in samples and show that NS3 is more sensitive to this small change in semantics. We also have included additional experiments in the Appendix (Figures 9 and 10), which show that CodeBERT’s performance is roughly the same even when dealing, with unseen actions and unseen entities, which confirms again that individual unknown actions or entities have low importance in the CodeBERT model while making a prediction, thus violating the faithfulness of the representation. \n\nWe suspect that the performance drop in longer queries could be due to the fact that more compositional queries are also harder to parse for the parser, having a higher chance of ending up with a noisy or incorrect parse among longer queries. This in turn affects our model both during learning and inference. \n\nPlease take a look at our general comment on the claim about handling longer texts for additional information and broader discussion.\n\n### **Is there any sample validating that the NS3 model mimics the staged reasoning for SCS? (Q2)**\n\nThank you for raising this important point, we agree that including more evidence of NS3’s reasoning steps will help improve the paper. We have updated Appendix, Section D with a demonstration of performance on an example with multiple steps. ",
" We want to thank you for your insightful comments and suggestions. We have attempted to address a number of concerns that you have expressed in the general comments to all reviewers, and hope that you find our answers satisfying. Additionally, we have updated the paper to address the issue with regards to the correctness of our claim of the model performance on long and compositional text, we have rephrased the claim to focus more on the faithful representation of the details in the query. \n\n### **Inaccurate claim on the advantage of the proposed method (Weakness 1)**\n\nAs you have correctly pointed out, one of the main claims in the paper, regarding encoding longer queries, was not backed by the experimental results. We have updated the paper to make the claim more true and correct, by specifying that we are interested in a representation of the query that is faithful to the details and its semantics. In other words, details or parts of the query are not misrepresented or omitted. We believe that from our experiments in Figure 7, it can be observed that NS3 is in fact obtaining a more faithful representation since it is more sensitive to small, but semantically important changes to the query. We discuss this more broadly in our general comment on the topic.\nWe are also working to include longer and more compositional examples in the paper and will update the draft accordingly in the next couple of days.\n\n### **Runtime and best usage scenarios (Weakness 2)**\n\nThere is some tradeoff of efficiency for performance during inference, but we believe it is not prohibitively large for real-world retrieval scenarios. We have performed a small benchmarking experiment, and on an NVIDIA Tesla V100 GPU CodeBERT takes 0.25 seconds to process 1000 samples, and NS3 then takes 0.45 seconds to perform the re-ranking of the top 10 samples. The overhead of training a new neural model is a one-time effort, and we spent between 10 and 15 hours training the model on an NVIDIA Tesla V100 GPU.\nYour comment about NS3 being used as a re-ranker is on-point, it is indeed the use case we have in mind for our model (LL237-242; LL263-267). For that reason we have also included a similarly evaluated re-ranking baseline with a different model in the second stage, these results are referred to as “GraphCodeBERT*” in Tables 1 and 2. \nFinally, we wanted to bring your attention to the fact that NS3 can be used on top of any base retrieval method for filtering the pool of candidates, not necessarily just CodeBERT.\n\n### **Evaluation setup and results on the full test set (Q1)**\n\nWe have excluded portions of the data that we could not parse because it is always possible to fall back to an end-to-end model, such as CodeBERT, in cases when the parser fails. Our parser can successfully parse a measurable portion of both datasets, about 40% of CSN and 70% of CoSQA, and NS3 shows a non-trivial improvement on the parsed portion, about 0.05 MRR on CSN and 0.2 MRR on CoSQA, so overall its application will be beneficial. We have also updated Appendix A.2, Table 3, to include the full dataset statistics before parsing.\n\n|Dataset|Train|Valid|Test|Full Train|Full Valid|Full Test|\n|-|-|-|-|-|-|-------------|\n|CodeSearchNet|162801| 8841| 8905| 412178| 23107| 22176|\n|CoSQA|14210 |-| -| 20,604 |-| -|\n|WebQueryTest|- |- |662| - |- |1046|\n \nIn addition, we have performed experiments on the full CoSQA test set, i.e. including both parsable and unparsable examples. 
As it can be seen from the results table below, while there is a shift in performance, the overall trends of the performance remain the same, and NS3 still offers improvement when used on top of CodeBERT. We have also updated Appendix C.3 accordingly, to include this experiment. \n\n|Method|MRR|P@1| P@3|P@5|\n|---------|------|-----|-----|-----|\n|CodeBERT| 0.29 | 0.152| 0.312 | 0.444 |\n|GraphCodeBERT| 0.367 | 0.2 | 0.447 | 0.561 |\n|NS3| 0.412 | 0.298 | 0.452 | 0.535 |\n",
" ### **The query parser implementation seems to be a lot of human work (Weakness 3)** \n\nThe parser rules were developed by us without any prior experience with CCG parsing. It was performed through a few iterations of manual assessment of the parsing results obtained on the CodeSearchNet dataset’s training set, and adjusting the parsing rules accordingly. The majority of the process took roughly two weeks. We want to note that we did not perform further changes to the parser depending on the performance of the NS3 model on the development or testing sets of either dataset. The resulting parser was robust enough to be applicable to CoSQA dataset without modifications. In addition, we have performed more experiments evaluating the parser on both queries concerning Python code, as well as code in five other programming languages (Go, Java, Javascript, PHP, Ruby). As it can be seen, the parser can parse 62% of the queries about Python, and at least 32% of queries about other languages, with PHP being the highest at 42%. Please refer to our comment to all reviewers for a broader discussion on the generality of the parser. \n\n|Language | Dataset | Parser Success Rate|\n|------------|----------|-------------------------|\n|Python| CoNaLa auto-mined | 0.62 |\n|Python| CoNaLa manual (train) | 0.65 |\n|Python| CoNaLa manual (test) | 0.63 |\n|Go| CodeSearchNet | 0.32 |\n|Java| CodeSearchNet | 0.33 |\n|Javascript| CodeSearchNet | 0.41 |\n|PHP| CodeSearchNet | 0.43 |\n|Ruby| CodeSearchNet | 0.35 |\n\n### **Did you randomly select 5k and 10k examples? (Q2)**\n\n5K and 10K sample scenarios were sampled randomly, and for those experiments, we do not report the average over multiple subsamples.\n\n### **Why did you focus on the two-step evaluation? (Q3)**\n\nRegarding us performing two-stage retrieval for evaluation, we want to mention that two-stage retrieval is not uncommon in information retrieval with a less precise, but faster method filtering a large number of examples, and a second, slower but more precise, method re-evaluating most likely candidates to make the final decision. You are correct in your understanding that both CodeBERT and GraphCodeBERT can be used in just 1 step. But as we see from our results in Tables 1 and 2, there is an improvement that can be obtained by applying NS3 as a second stage. The reason we did not run our model in just 1 stage is resource efficiency. Our model uses dynamic architecture (LL142-163), which makes it harder to batch examples together and process multiple examples at once because from one example to another we end up with a different model architecture. \n\n### **Why does GraphCodeBERT perform much worse than CodeBERT in Figure 5? (Q4)**\n\nFrom our results there is no way to conclusively decide whether GraphCodeBERT is a stronger baseline than CodeBERT; in particular, in Figure 5 that you mention, GraphCodeBERT is stronger on CoSQA. \n",
" Thank you so much for bringing up many important points, and appreciating the care we put into designing our experimental setting and ablation studies. We agree with your concerns regarding the parser and hope to have addressed most of them below, as well as in the general comment to all reviewers.\n\n### **Potential of the parser to generalize to new datasets and other programming languages (Weakness 1)**\nAfter developing the parser on the CodeSearchNet dataset’s training portion, we did not perform additional changes to parse the CoSQA dataset. In other words, the parser that was created for CodeSearchNet was robust enough to be applied to another dataset without significant modifications. In addition, we have performed a study on new datasets (Appendix A.4), to evaluate the performance of the parser on a) an unseen dataset of English queries on Python code, and b) unseen datasets of English queries for code in other programming languages (Go, Java, JavaScript, PHP, Ruby). According to this experiment, we were able to parse at least 62% of Python code queries, and at least 32% of queries for other languages, with PHP being the highest at 43%. \n\n|Language | Dataset | Parser Success Rate|\n|------------|----------|-------------------------|\n|Python| CoNaLa auto-mined | 0.62 |\n|Python| CoNaLa manual (train) | 0.65 |\n|Python| CoNaLa manual (test) | 0.63 |\n|Go| CodeSearchNet | 0.32 |\n|Java| CodeSearchNet | 0.33 |\n|Javascript| CodeSearchNet | 0.41 |\n|PHP| CodeSearchNet | 0.43 |\n|Ruby| CodeSearchNet | 0.35 |\n\nIn future work, it is possible to alleviate the need for the parser altogether, by asking the users of code search to formulate their semantic queries in some formal way instead, for example, following the syntax of SQL or CodeQL queries.\n\n### **Potential of the parser to generalize to new natural languages, e.g. English->Chinese (Weakness 1)** \n\nAs you have correctly noted, generalizing to a new language, such as Chinese instead of English is infeasible for the current implementation of our semantic parser. However, most models, including CodeBERT or GraphCodeBERT would have to be retrained from scratch for that scenario, requiring a lot of additional work, so we do not believe such a scenario imposes a limitation unique to just our approach. \n\n### **Engineering and selection of parsing rules (Q1)** \n\nRules for the parser were indeed created manually. In the process of rule-creation, we used the training portion of the CodeSearchNet dataset. This process was guided by a qualitative assessment of the output parses on the training portion of the CodeSearchNet dataset. We want to highlight, however, that we did not perform additional changes to the parser based on the performance of the full NS3 model on either development or test sets.",
" Dear Reviewers, \n\nWe want to thank you for your careful comments and valuable suggestions. We were pleased to hear that the **following strengths** of our work have caught your eye: \n- **kn2F**: very strong empirical results, including detailed evaluations and ablation studies.\n- **FqUC**: strong performance on an interesting problem\n- **dTG5**: strong performance achieved with an interesting solution using a combination of symbolic information from CCG parse and neural approach based on Transformer\n- **SyPM**: strong performance achieved on an interesting and novel application of neural module networks to a novel problem.\n\nWe also are very grateful for constructive criticisms - we have received a number of great questions and suggestions for improvement of the paper. We hope that **our answers will clear your concerns** that we have outlined below, and you will find that the overall quality of our submission has improved after making the following changes:\n- **kn2F, FqUC, dTG5**: performance of the parser as well as its generalization capacity. We have updated Appendix Section A.4 to include additional evaluations of the parser on a number of new code-search datasets. The datasets are comprised of English text and code in Python, as well as 5 other programming languages (Go, Java, Javascript, PHP, Ruby).\n- **FqUC, dTG5**: how parsable vs un-parsable examples skew the performance of the models. We have updated Appendix Section C.3 to include the performance evaluation for our model, CodeBERT, and GraphCodeBERT, when evaluated on the entire dataset, as opposed to just its parsable portion. We have also updated Appendix Section A.2, Table 3, to include original data statistics before parsing\n- **dTG5, FqUC**: the correctness of the claim about the advantage of the proposed method for longer texts. We have rephrased the corresponding claim to specify our model focuses on the faithful representation of details of the query, and walk through experiments that evidence this.\n- **SyPM**: Clarity issues in the introduction and Section 3.3. We have updated those sections to improve the clarity of the writing and added more details about implementation to avoid confusion by the reader. We have also fixed some typos.\n\n\nFor your convenience, **the changes to the revised paper** and supplementary material (appendix) **are emphasized with blue text**. To make it more convenient to check our appendix, we only uploaded the appendix pdf file as the supplementary material. Our previous zip file with code is still accessible in the revision history. \n\nWe thank you again for taking the time to review our paper and engage in the discussion.\n\nAuthors",
" We are very grateful to the reviewers for pointing out that one of our claims, which was formulated as the ability to encode longer texts in contrast to existing models (L58), was inaccurate and inconsistent with the results. More precisely, and we have updated the wording in the paper to reflect this, our concern is about the inability of existing models to obtain a representation of the text that remains faithful to the details and query’s semantics. We demonstrate that NS3 achieves such faithfulness in the study in Figure 7, where we replace a single entity or a single action in samples and show that NS3 is more sensitive to this small change in semantics. We also have included additional experiments in the Appendix (Figures 9 and 10), which show that CodeBERT’s performance is roughly the same even when dealing with unseen actions and unseen entities, which confirms again that an individual unknown action or entity have low importance in the CodeBERT model while making a prediction, thus violating the faithfulness of the representation.\nWe have updated the corresponding claims in the Introduction, when discussing the advantages and evaluation results, as well as in Limitation in Section 2.2, and hope it makes the paper more clear and sound.\n\n",
" Rules for the parser were created manually. In the process of rule-creation, we used the training portion of the CodeSearchNet dataset. This process was guided by a qualitative assessment of the output parses on the training portion of the CodeSearchNet dataset. \nWe want to highlight, however, that we did not perform additional changes to the parser based on the performance of the full NS3 model on either development or test sets, and neither did we make changes to the parser to parse the CoSQA dataset in addition to CodeSearchNet later. In other words, the parser that was created for CodeSearchNet was robust enough to be applied to another dataset without significant modifications. \n\nIn addition, we have performed a study on new datasets (Appendix A.4), to evaluate the performance of the parser on a) an unseen dataset of English text and Python code, and b) unseen datasets of English text and code in other programming languages (Go, Java, JavaScript, PHP, Ruby). According to this experiment, we were able to parse at least 62% of English queries on Python data, and at least 32% of queries for other languages, with PHP being the highest at 43%. Full results are presented in the Appendix, below is the summary:\n\n|Language | Dataset | Parser Success Rate|\n|------------|----------|-------------------------|\n|Python| CoNaLa auto-mined | 0.62 |\n|Python| CoNaLa manual (train) | 0.65 |\n|Python| CoNaLa manual (test) | 0.63 |\n|Go| CodeSearchNet | 0.32 |\n|Java| CodeSearchNet | 0.33 |\n|Javascript| CodeSearchNet | 0.41 |\n|PHP| CodeSearchNet | 0.43 |\n|Ruby| CodeSearchNet | 0.35 |\n\nWe also want to note, that while undeniably the parser can be improved, our manual assessment of failed parses shows that a lot of those sentences do not in fact represent proper queries, and are noisy instances in the datasets. Among such instances we have discovered URLs, function signatures, unrelated comments to the semantics of the function, and others. We discuss this and provide specific examples in Appendix A.3, Table 4.\n\nAnd finally, we performed an evaluation on the full testing portion of the CoSQA dataset, to demonstrate that NS3 still improves the performance when compared to CodeBERT and GraphCodeBERT baselines. This experiment is discussed in Appendix, Section C.3 and Table 6. Below we present a summary of the table.\n|Method|MRR|P@1| P@3|P@5|\n|---------|------|-----|-----|-----|\n|CodeBERT| 0.29 | 0.152| 0.312 | 0.444 |\n|GraphCodeBERT| 0.367 | 0.2 | 0.447 | 0.561 |\n|NS3| 0.412 | 0.298 | 0.452 | 0.535 |",
" This paper proposes NS3, Neuro-Symbolic Semantic Code Search. NS3 supplements the query sentence with a layout of its semantic structure, which is then used to break down the final reasoning decision into a series of lower-level decisions. NS3 outperforms baselines on CodeSearchNet and CoSQA. Strengths:\n1. NS3 is evaluated in two well-established benchmarks, CodeSearchNet and CoSQA, and compared against strong baselines.\n2. The empirical result is very strong. The proposed method, NS3, outperforms state-of-the-art by a large margin.\n3. The paper includes detailed experiments on reduced dataset settings and ablation studies.\n\nWeaknesses:\n1. NS3 requires rule-based parsing of natural languages, while other baselines, such as CodeBERT, only involve natural language models. Rule-based parsing of natural languages is complicated, language-dependent, and less scalable than language models. In `parser_dict.py` of the released parser, there are hundreds of task-specific parsing/synonym rules for natural languages. These rules are not transferrable to different natural languages (e.g. English to Chinese), and they even cannot generalize to different programming languages (e.g. Python to C++). What are the process of engineering parsing rules and NL-action mappings? Are they engineered according to the performance on the dev set or the test set? N/A",
" The paper studies the problem of retrieving code snippets given textual queries (called semantic code search). The work is motivated by language models’ limitations on encoding longer and compositional text (which I question a bit about, see my comments below). The authors propose a neural module network (called NS3) and introduce a modular workflow according to the semantic structure of the query. More specifically, NS3 contains two types of neural modules, entity discovery module, and action module, to estimate the semantic relatedness of code tokens and entity mentions and actions in a query separately. It decomposes the task into multiple steps of matching each data entity and action in the query. The authors demonstrate the effectiveness of the proposed method on several code search benchmarks and show that their method outperforms some strong baselines in some settings (on which I’m a bit confused).\n Strengths: \n\nThe problem of semantic code search is very interesting. It is easy to follow the writing of the paper. The authors compare the proposed methods with other works and show strong performance on multiple code search benchmarks. \n\nWeaknesses:\n\nThe authors mention that one of the main motivations of the work is because language models struggle with encoding long and compositional text. I’m a bit suspicious. Text queries as examples shown in the paper (but not just these examples, generally speaking) are not very long and complicated. Some of them are even too simple and ignore some details, which leads to mismatching between the query and code (which could be the real challenge). Language models are able to encode much more complex and longer texts than these examples in the paper….\n\nAccording to what the authors say in lines 265-266, CodeBERT and many other simpler approaches can be fed examples in batches, which make them much faster in retrieval settings. In real-world retrieval settings, efficiency is also a very important consideration. In this case, the proposed method NS3 is less attractive (also depending on other methods such as CodeBERT). This makes NS3 look more like a re-ranking model instead of a fully actionable code retrieval model. \n\nThe query parser implementation seems to be a lot of human work (e.g., building a vocab of action and entity words).\n Line 246-248: I’m confused about the evaluation data statistics. Is it a common practice to exclude unparsable examples for evaluation? It looks like that the authors did that only because of the limitations of their method (only taking parsable queries). It would be helpful to provide data statistics both before (original) and after parsing in Appendix A.2 (Table 3).\n\nLine 254: Did you randomly select 5k and 10k examples? If they are randomly chosen did you report results on multiple randomly selected examples? \n\nLine 264-266: Why did you focus on the two-step evaluation? Again, do CodeBERT/GraphCodeBERT themselves (not by yourselves) also report results in this setting? It seems that they can be applied in a single-step setting. Why not evaluate your method in that setting too without depending on CodeBERT’s first step predictions (then only applying your method to rerank the top 10 CodeBERT predictions)? As you mentioned in Line 303-305, the highest possible results you could get with this evaluation strategy is kind of low (74% on CoSQA…)...\n\nLine 307- Fig. 5: Do you have any explanations about why GraphCodeBERT performs much worse than CodeBERT in many cases? 
Is it the strongest baseline you compare with?\n N/A",
" The paper proposes the NS3 (Neuro-Symbolic Semantic Search) model, which breaks down the query into small phrases using the Categorial Grammar-based semantic parse module, which can better understand the compositional and longer queries.\n\nTo identify the similarity of each query and code snippet, the NS3 model uses two types of neural models, the entity discovery module and the action module. The entity discovery module uses a transformer encoder model and a two-layer MLP to identify the entities and their relevances. RoBERTa model initialization and noisy supervision training are applied in its self-supervised pretraining phase. The Action module architecture is similar to the entity discovery module. It estimates the action similarity through the prediction of the masked entity embedding. The module can be pre-trained with a mask-predict process for the masked entity. After pretraining, the model performs end-to-end fine-tuning for two modules.\n\nThe experiments on the CSN and CoSQA datasets show the superiority of the proposed model over the baseline methods on the parsable samples. Furthermore, the ablation study validates the effectiveness of the pretraining and investigates different score normalization methods and similarity metrics.\n **Strengths**:\n\n1. The proposed NS3 model utilizes the Categorial Grammar-based(CCG) semantic parser to comprehend the semantic structure of the query better and combines the Transformer-based neural model to capture semantic information of the query text, which is an interesting idea.\n\n2. The experiment validates the effectiveness of the proposed modules and the pretraining strategy. Furthermore, the proposed model achieves noticeable improvements over the baseline models in the parsable samples.\n\n\n**Weakness**:\n\n1. The proposed model could not operate properly in arbitrary natural language scenarios that 60% of the CSN dataset and 30% of the CoSQA dataset records are not parsable.\n\n2. According to Line 58, the author claims the model mitigates the challenges of encoding long texts and mimics the staged reasoning for SCS. However, according to Figure 5(a), the NS3 model is not significantly improved in the deeper query situation (D=3+). The model performs better on the query with a simple semantic structure (D=1), which is inconsistent with the initial claim.\n\n3. The unparsable issue restricts the quantity and quality of the experimental data. According to Table 1 and [1], the MRR of the GraphCodeBert model is higher on the parsable dataset (0.812 vs. 0.692). The parsable data may be easier to comprehend through the NS3 model, making the experiment comparison unfair.\nReference:\n\n[1] Guo, D., Ren, S., Lu, S., Feng, Z., Tang, D., Liu, S., ... & Zhou, M. (2020). Graphcodebert: Pre-training code representations with data flow. arXiv preprint arXiv:2009.08366.\n\n 1. Could you elaborate on the influence of the parsable sample selection on the dataset? For example, are the long sentences and samples hard-to-understand remained?\n\n2. Is there any sample that can validate the NS3 model mimics the staged reasoning for SCS? \n In the experiments, according to my understanding, only parsable data are used in the evaluation. I would suggest using all the data in the experiments to verify if the proposed method still helps in improving the SCS performance. In other words, it is not fair to compare with other baselines and models using only the dataset bias towards your method.",
" The paper introduces a novel method for semantic code search using neural module\nnetworks. The layout of the network is produced from a semantic parse of the\nquery. There are two types of modules: entity discovery modules and action\nmodules. Entity discovery modules correspond to the nouns in the semantic parse\nand each such module tries to discover the given entity (noun) of the query in\nthe code (they assign a relevance score to each code token).\n\nAction modules correspond to the verbs in the query. Each action module receives\nonly part of the full query with the last entity argument masked, and tries to\ndiscover the masked entity (estimate its relevance scores). The intuition is\nthat if the code snippet indeed corresponds to the query, then the action module\nshould be able to estimate the relavence scores of the masked entity based on\nthe code and the rest of the query. Nested action modules are flattened and\ntheir scores are multiplied together.\n\nThe final score which measures the relatedness of the code snippet to the query\nis computed by taking the normalized dot product of the relevance scores of an\nentity computed by its entity discovery module and by the action module in which\nit was masked. If there are multiple such scores (because there are multiple\naction modules) they are multiplied together.\n ## Strengths\n\nThe paper is an interesting and novel application of neural module networks to\nsemantic code search and improves the state-of-the-art results.\n\n## Weaknesses\n\nI think the main problem with the paper is that some parts are hard to\nunderstand in their current form. The introduction could be more concrete with\ninformation incorporated from the caption of Figure 2 and Section 3. For\nexample, I think that the introduction should mention the intuition behind\naction and entity discovery modules, and how that is used to compute the\nrelatedness score.\n\nI found the second paragraph of Section 3.3 especially hard to understand, I\nthink it should be elaborated. One reason for my confusion was the dual role of\nprepositions: they belong to the entities, but we embed them with the verbs. I\nstill don't understand what happens when we have multiple entities as the input\ndimension of the transformer would change depending on the number of entities.\nAlso, the dimensions, and what is concatenated to what and in which direction is\nnot clear.\n\nLine 208: the code token embedding should be $c^j_k$ and not $t_k$.\n\nLine 246: \", Section 4.1,\" is somewhat confusing, \"later in this Section\" would be better\n\nLines 268-277: some of the methods are not cited.\n\nI find it odd that there is a \"Background\" and a \"Related work\" section, as\nthe Background already discusses the limitations of related work.\n\n Why is batching hard for the model? (it was mentioned in line 237)\n\nIt was mentioned that the performance of the model peaks at depth=1. Could that\nbe because of how compositional queries are handled (as described in lines\n219-228)?\n\nI found Figure 4 confusing. Why are both \"Load 0\" and \"Load from\" present in the\nfigure? On Figure 2 we just have \"Load from\" for the same action module and\nquery. Also, there is just one unmasked entity and it's without a preposition, so maybe\n\"Load 0\" would be appropriate?\n I cannot think of any limitations which are not addressed.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"ynjqfwA6Cl",
"HR3y86wZ_Pa",
"ndfG3I7-W_l",
"ZNF0I8gzSWU",
"VWfeOZlB_O",
"ZNF0I8gzSWU",
"REjXBoVvCM",
"ndfG3I7-W_l",
"haZvCLBEQ-uH",
"ndfG3I7-W_l",
"REjXBoVvCM",
"ZNF0I8gzSWU",
"bzEuFDADzfa",
"bzEuFDADzfa",
"ZNF0I8gzSWU",
"ZNF0I8gzSWU",
"REjXBoVvCM",
"REjXBoVvCM",
"ndfG3I7-W_l",
"nips_2022_kHeotl7q9dU",
"nips_2022_kHeotl7q9dU",
"nips_2022_kHeotl7q9dU",
"nips_2022_kHeotl7q9dU",
"nips_2022_kHeotl7q9dU",
"nips_2022_kHeotl7q9dU",
"nips_2022_kHeotl7q9dU"
] |
nips_2022_Ijq1_a6DESm | On the consistent estimation of optimal Receiver Operating Characteristic (ROC) curve | Under a standard binary classification setting with possible model misspecification, we study the problem of estimating general Receiver Operating Characteristic (ROC) curve, which is an arbitrary set of false positive rate (FPR) and true positive rate (TPR) pairs. We formally introduce the notion of \textit{optimal ROC curve} over a general model space. It is argued that any ROC curve estimation methods implemented over the given model space should target the optimal ROC curve over that space. Three popular ROC curve estimation methods are then analyzed at the population level (i.e., when there are infinite number of samples) under both correct and incorrect model specification. Based on our analysis, they are all consistent when the surrogate loss function satisfies certain conditions and the given model space includes all measurable classifiers. Interestingly, some of these conditions are similar to those that are required to ensure classification consistency. When the model space is incorrectly specified, however, we show that only one method leads to consistent estimation of the ROC curve over the chosen model space. We present some numerical results to demonstrate the effects of model misspecification on the performance of various methods in terms of their ROC curve estimates. | Accept | I have read all the reviews and discussions carefully.
The reviewers all praised the novelty and the significance of the work. The major complaint is that the proofs are too long to be carefully checked. The authors have appropriately addressed some comments from the reviewers. Given the unanimous support, I have decided to recommend the acceptance of the paper. | train | [
"monGsw5zwH1",
"lQPEcSgSlEG",
"jOdcwCNZhsr",
"59-wYhhAtLw",
"p2dmuSFXNB5",
"vWtpCRRHaFZ",
"BfE14_ijNA1",
"R6M8LBB-N8h",
"lYKkzR2BdIY",
"qMhKp1UYNXy",
"OluPV9r9SBF",
"eAT3qJXwUG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your further suggestions. \nWe have made some additional edits to the article\nbased on your suggestions. In particular, \n- We have rewritten the statement on line 86. \n- We will expand the real data examples in the main file if our paper is accepted. ",
" I thank the authors for their response. It is reassuring to know that cross-entropy is included in their result.\n\nWith that said, I choose to keep my score at a 6. I would not mind seeing the paper accepted, but I don't feel comfortable taking a strong stance in favor of acceptance because it is hard to have very high confidence in the correctness given the 59 pages of proofs, far more than is reasonable to expect a reviewer to double-check in such a short time frame with several other papers to review at the same time. \n\nI realize that NeurIPS is very prestigious and an acceptance is highly desirable, so I understand the authors' choice to submit to NeurIPS. However, I still feel it might be better for this work to go through the review process at a journal, where reviewers have more time to double-check the proofs more thoroughly. ",
" Thanks to the authors for your thorough response. The authors have largely addressed my concerns in their reply and revision, so I am raising my overall score from a 4 (borderline reject) to a 7 (accept).\n\nA few remaining things to note:\n - If I'm understanding correctly, the satement on line 86 \"our definition is quite general and does not make any assumptions on the data generating distribution\" is incorrect, and that it should state that the definition makes weaker assumptions than prior work (thanks for clarifying these assumptions vs those made in previous work in your reply).\n- The experiments on real-world data are not very comprehensive as only one dataset is considered, but they do support the theoretical results. It would be nice to see a few more datasets included in the results for Figure S3.\n- It's somewhat cumbersome that the experiment results are still in the supplementary material only, but I understand the space limit, and the key takeaways are conveyed in the text of the main paper.",
" Hi authors and reviewers,\n\nThe discussion phase has begun. Please read the other reviews and the author's response (if the authors choose to submit one) and start discussing them with the other reviewers, the authors, and myself.\n\nNote that by default, the authors can see the discussions posted by the reviewers (and vice versa). Please use the \"Readers\" field to adjust the audience of your post if so wished.\n\n*Our goal is to contribute to the discussion to reach a consensus on each paper*. There is only one week for the discussion (until **August 9**). So please do not wait and start the discussion immediately. Thank you very much.\n\nBest,\\\nThe AC",
" Thanks for your thorough review and valuable comments.\nFor your questions on proof techniques, \nour responses are summarized below.\n\n- The proof techniques in existing literature rely heavily on \nthe assumption that $\\eta(X)=\\mathbb P(Y=1\\mid X)$ has a continuous distribution.\nBy contrast, \nwe only assume that the distribution of $X$ is continuous, which makes \nthe old proof techniques no longer applicable in this more general setting. \nFor example, under the old assumption that $\\eta(X)=\\mathbb P(Y=1\\mid X)$\n has continuous distribution,\n we do not need to address the case that $\\mathbb P(\\eta(X)=c) > 0$\n for some $c$.\n But this could happen under the less stringent assumption that $X$\n has continuous distribution. \n The new proof techniques are designed partily to address these challenges. \n- As an example, consider a case where the two conditional distributions of $X$ given $Y=1$ and $Y=-1$\n have different support, that is, there exists a region $D$ such that $f(x|Y=1)/f(x|Y=-1) = 0$ or \n $\\infty$ for $x\\in D$. In this case, $\\eta(X)$ is no longer continuous \n but the distribution of $X$ can still be continuous. \n This situation can happen when the two classes are separable. \n\nFollowing your suggestion, \nwe have also added some potential limitation of our work and gave\nsome future directions to explore further. ",
" Thanks for your valuable comments and suggestions. \nBelow are our responses.\n\n- Our cutoff method analysis can cover cross entropy loss. \nIndeed, logistic regression corresponds to the choice of \nsurrogate loss $V(Yf(x))=\\log(1+\\exp(-Yf(X)))$ in our setup.\nTo see this more clearly, we write the probability \nof $Y=1$ given feature $X=x$ as \n$\\mathbb P(Y=1\\mid X=x)=\\eta(x)=1/(1+\\exp(-f(x)))$, \nwhere $f(x)$ is a discriminant function (e.g., a linear function).\nThen we have \n$V(f(X))=\\log(1+\\exp(-f(X)))=-\\log(\\eta(X))$ if $Y=1$\nand \n$V(-f(X))=\\log(1+\\exp(f(X)))=-\\log(1-\\eta(X))$ if $Y=-1$.\nMaybe the confusion is caused by the notations in that in our setup we use\nnotation $f(x)$ to refer to a discriminant function in\n$V(Yf(x))$, while in your understanding $f(x)$ is the estimated probability which we will use $\\eta(x)$ instead in our setup.\n- Thanks to the suggestions on notation of indicator function, \nwe have added a description on line 157. \n- We have added the suggested literature and fixed the typos in the article. ",
" Thanks for your valuable comments and suggestions. \nWe summarize our responses below: \n- As you have suggested, we can impose additional assumption that $\\eta(X)$ \nhas continuous distribution, which will avoid the discussion of some scenarios we considered \nin our paper. \nWe choose to make our results as general as possible by only assuming that the marginal distribution of $X$ is continuous. \nThis allows our result to be applicable to some important scenarios. \nFor example, if the conditional distributions of $X$ given $Y=1$ and $Y=-1$ have different supports,\nthen $\\eta(X)$ is no longer continuous but the distribution of $X$ can still be continuous. \nThis could happen when the two class is perfectly separable. \n\n- Thanks for your suggestions on including some real data analysis.\nWe have added one real data example and some further discussions on the results in Section 4.",
" Thanks for your valuable comments and suggestions. Following your suggestions,\nwe have made some changes to our submission, which we feel greatly improved the presentation. \nMain changes include: \n- We added some high-level discussions on the practical issues of using three common ROC generating procedures in the Appendix. We will include them in the main file if our paper gets accepted.\n- Following your suggestions, we moved some theoretical results in Section 2 and Section 3 to the Appendix. \n- We carefully proofread our paper, and added missing descriptions to some of the notations and terms as you pointed out in your report. \nWe also included more detailed explanations in the Questions/Suggestions section of our rebuttal.\n- We added a real data analysis example in Section 4. We also added some further discussions on the implications of \nour simulations studies and real data examples in the article. \n\nFor your questions and suggestions,\nwe summarize our response and changes to the main file as follows.\n\n### Questions\nThe basic assumption we use for data distribution is that the marginal distribution $X$, denoted as $\\mathbb P_X$, is continuous.\nThis is used for all our results in Neyman-Pearson classification (section 3.1)\nand Weighted method (section 3.2).\nThe reason why we impose assumption that \n$\\eta(X)$ is a continuous random variable in Theorem 4 and Theorem 5 of section 3.3 is that,\nthe cutoff method will require stronger assumption on the distribution of $X$ in order to target the optimal ROC curve.\nThis stronger assumption also reveals that cutoff method is a less favorable method in general applications.\nTo make it more clear, \nwe have revised this part to avoid confusion. \n\nTo see why our continuity assumption on the marginal distribution $X$ is weaker than \nthe assumption that $\\eta(X)$ is continuous in literature, \nwe could look at the case that the conditional distribution of $X$ given $Y=1$ has \ndifferent support with the conditional distribution of $X$ given $Y=-1$, \ni.e. there exists region $D$ such that $f(x|Y=1)/f(x|Y=-1) = 0$ or $\\infty$.\nIn this case, $\\eta(X)$ is no longer continuous but the distribution \nof $X$ can still be continuous.\nIn practice, the situation that these two conditional \ndistributions of $X$ have different supports could happen. \nIn view of this, our results under weaker assumption are more general and applicable to \na broader class of problems. \n\n### Suggestions\nThanks to your valuable suggestions on improving our paper. Following your suggestions,\nwe have made the following changes to our paper:\n1. We have added a real data example in the main paper. Due to space limit, we put some figures in the Appendix. \n We will include them in the main text if the paper is accepted for publication. \n2. We have added more discussions on our experimental results, practical recommedations (in the Appendix) and future work (in the Discussion section). \n3. We have rewritten parts of the introduction with more clear motivations.\n\nWe have also added definitions or descriptions for some of the terms and symbols you pointed out \nthat are undefined. These include \n- \"Model mis-specification'' means that the optimal classifier is not included in the set of classifiers used for a method. \n For example, if the optimal classifier has a quadratic decision boundary, then all classifiers with a linear decision boundary is considered to be mis-specified. 
(line 188)\n- \"at the population level'' refers to the situation when the number of samples goes to infinity. (line 34)\n- $\\mathbb I(\\cdot)$ denotes the indicator function. (line 157)\n- $p^+=\\mathbb P(Y=1)$ and $p^-=\\mathbb P(Y=-1)$ are probabilties of observing positive and negative class labels, respectively. (line 108)\n- $\\text{cl}(A)$ denotes the closure of set $A$. (line 215)\n\nMoreover, we also included a scatterplot showing the $2$-D \ndata and corresponding labels for our simulations in the Appendix.\n\n### Limitations\nWe have added some high-level discussions on the practical issues of using the three common \nmethod for estimating ROC curves in the Appendix.\nThese discussions are not only supported by our results under the \nassumption that the model space includes all possible classifiers \n(e.g. Corollary 1, Corollary S2 and Theorem 4),\nbut also based on our results under the assumption that the model space is general and \nmay be mis-specified (e.g. Proposition 1, Theorem S1 and Theorem S2).",
" The authors present a more general notion of \"optimal ROC curve\" for binary classification. They show that their definition relies on weaker assumptions than predecessors when considering the set of all possible classifiers, and they extend the definition for general sets of classifiers. They also show sufficient conditions for three common ROC generating procedures to be consistent, under this definition of optimal ROC, when the model is correctly specified. Results on simulated data agree with the theory. Overall, I think this paper has a solid core with some weaknesses in the presentation and analysis. Some of the weaknesses are easy to address; if they are addressed, I would improve my score.\n\nEDIT: the authors have largely addressed my concerns, so I am changing my overall score from 4 (borderline reject) to 7 (accept).\n\nStrengths:\n\nI am not deeply familiar with related work, but based on the authors' presentation of it, the work contains original, significant contributions, particularly in section 3. The technical portions are thorough and most sections are clear and easy to read. The problem being addressed is general, fundamental, and broadly relevant in machine learning.\n\n\nWeaknesses:\n\nThe main paper is largely consumed by technical details, and as a result, many higher-level parts (motivation, related work/context, experiment results and analysis, discussion) are not thorough. This makes it difficult to determine the significance and potential impact and limitations of the work. \n\nThe originality/significance of section 2 is weak. Most of the results are noted to be only slight generalizations of previous work. The key novelty is the notion of \"optimal ROC\" for general sets of classifiers, and there is hardly any useful analysis in this context compared to the context of all possible classifiers.\n\nThere are some terms and notations throughout the text that are not clearly defined before they are used, and for which I couldn't find definitive information when searching online. These hinder understanding of high-level concepts as well as technical details. This should be easy to fix, though. I've listed some in the Questions/Suggestions section.\n\nThe experimental results are very limited and not thoroughly discussed. There are only results for two simple synthetic datasets, the results are relegated to the supplementary material, and the discussion of the results is limited. This would be a good place to test on some real datasets, where assumptions will not necessarily hold, and discuss the practicality and limitations of the paper's contributions. Questions:\n \nCould you clarify the assumption of continuous data distribution? I notice what appear to be some contradictions. Line 84: \"However, our definition is quite general and does not make any assumptions on the data generating distribution,\" and a similar statement is made on line 150. However, line 104 states \"Throughout, we make the assumption that the marginal distribution $X$, denoted as $\\mathbb{P}_X$, is continuous...\" and in some places you make the similar assumption that $\\eta(X)$ is continuous.\n\nSuggestions:\n\nIf possible, it would be beneficial to move some of the technical details from sections 2 and 3 to the supplementary material so that (assuming the brevity of other sections was a result of page limit) the motivation, context, and discussion can be improved. 
For example, many points in section 2 are relatively less significant in the context of prior work and may merit only a mention in the main paper with deeper analysis in the supplementary material; similarly, many key takeaways in section 3 can be stated while moving most of the bulky math to the supplementary material. Some figures in the main paper, both to aid understanding of concepts and to show the main results being discussed, might help too. If it were me, the priority would be:\n1. placing the main experiment results in the main paper\n2. making the discussion more thorough, including more detailed interpretation of experiment results, potential impacts, limitations resulting from assumptions, practical uses, future work\n3. more clearly motivating the problem and its significance (why should someone using ROC curves for model evaluation care about these results?) in the introduction\n\nUnderstanding would be greatly aided by making sure that all terms and symbols are defined. Some that would aid my understanding:\n - \"model misspecification\" in this context\n - \"at the population level\"\n - $\\mathbb{I}(...)$ (I assumed this was the indicator function, but see places where I'm not sure that makes sense, such as when there is a random variable with no $\\mathbb{P}$ or $\\mathbb{E}$)\n - $p^+$ and $p^-$\n - $cl(...)$\n\nSince the synthetic data being used is 2D, include a scatterplot showing the data and its labels. Putting it in the supplementary material is fine. The authors state the assumptions that are necessary for their results, but the realistic limitations implied by these are not always clear. In particular, one of the novel elements of the work is that the \"optimal ROC\" definition can be extended to general sets of classifiers; however, many of the key results require the assumption that the set contains all possible classifiers, which has important implications in practice that should be discussed.",
" The problem of estimating ROC curves under possible model misspeciffication is\nstudied. A notion of optimal ROCs under arbitrary model spaces is introduced,\nand three existing methods of estimating ROC curves are studied with respect to\nthis optimal curve. The optimal ROC is shown to coincide with existing\ndefinitions when the model space includes all measurable classifiers. All three\nestimation methods are shown to be consistent when correctly specified, however\nonly one estimation method (constrained/Neyman–Pearson) targets the optimal\ncurve under model misspecification.\n The work is novel and the results broadly applicable as ROC analysis is widely\nused and forms the basis for many methods. The presentation is good, writing\nclear, and the novelty well placed amongst existing literature. No proofs are\npresented in the main text, but the interpretation and general presentation of\nthe theory is excellent.\n\nThe proposed framework is shown to coincide with existing results, and indeed\ngeneralises some results as there are no assumptions on the data generating\ndistribution.\n\nThe theory is lightly explored in simulation studies. As expected, the\nsimulation results align with the theory, but the main results figures are\npresented in the supplementary materials as there is a lack of space in the main\ntext. There are no experiments on real-world data.\n A significant portion of the theory handles seemingly pathological corner cases,\ne.g., non-differentiable parts of the weighted method and pieces dealing\nspecifically with equalities. Would some mild assumptions allow for meaningful\nsimplifications of the theory?\n\nThere are no real-world experiments, how big a difference is there on real world\ndata?\n The limitations of the theory are clear from the stated assumptions. No real\nworld experiments are a limitation that could be briefly discussed.\n",
" The authors present a definition of 'optimal ROC curve' which is applicable under model misspecification, i.e., when the model class does not contain the target function. Three different supervised learning approaches for producing a set of fitted functions generating an ROC curve are analyzed in the limit of infinite training data (i.e. at the \"population level\"): the constrained (Neyman-Pearson) method of maximizing true positive rate subject to an upper bound on the false positive rate, the risk minimization/cost optimization approach in which false positive and false negative errors are weighted differently, and a thresholding (a.k.a. .\"cutoff\" method) in which one real-valued-output discriminant function is learned and an ROC curve is generated by varying the threshold above which one class is predicted instead of the other. If the model class contains all measurable classifiers, then all 3 methods are shown to be consistent in the sense of converging on the optimal ROC curve, although the cutoff method requires a stronger set of assumptions on the loss function used. If there is model misspecification, however, the authors show that only the constrained method is consistent in estimating the optimal ROC curve. Some simulations are presented where the model class is linear but the target function is either linear (to illustrate the case where the model is correctly specified) or quadratic (to illustrate model misspecification). The results are original to the best of my knowledge.\n\nAssuming correctness, I do believe the result that only the constrained method is consistent under model misspecification is a significant result with practical implications (i.e. that the constrained method should be used unless using nonparametric methods).\n\nHowever, I have some concerns about whether the cutoff method result is derived under realistic assumptions. The assumption is made that a discriminant function f(x) is chosen to minimize a weighted expectation of V( Y f(x) ), where V is a surrogate loss function and Y is the binary class (1 or -1). It is not clear to me whether this construction covers the most likely real world scenario for binary classification: logistic regression, where f(x) is chosen to minimize cross-entropy (sometimes described as log likelihood). Here f(x) would range between 0 and 1 and represent the probability of class 1 and we would be minimizing -log(f(x)) if Y=1, -log(1-f(x)) if Y=-1. I might be mistaken, but it is not clear to me whether assuming the loss surrogate V is a function of Y f(x) covers the cross entropy case. \n\nRegarding clarity and presentation, my main concern is that NeurIPs might not be the ideal venue for this work, given that the supplemental contains 59 pages (!!!) of mostly proofs. I wonder whether it would be better to publish this work in a journal where the reader has access to the full proofs. \n\nI also had one very specific presentation complaint. The indicator function notation used (for instance) on line 157 in s(alpha):\n\n ...indicator(eta(x) > c(alpha))\n\n...is not the standard notation for indicator function. Maybe this notation is used more than I realize, but I am much more familiar with indicator being represented as a fancy 1 with 2 vertical lines...\n\nhttps://ewens.caltech.edu/public-code-data/indicator-function-latex \n\n...or simply with I(). I would suggest at least adding a note explaining that the symbol you used represents the indicator function. 
It took me a while to figure out what that symbol meant, and I had to ask the area chair to confirm for me.\n\nHere are some suggested edits to the writing:\n\nLine 24 empirical risk minimization goes back a lot farther than Elkan 2001 with cost-sensitive learning e.g. Vapnik 1992 empirical risk minimization:\nhttps://proceedings.neurips.cc/paper/1991/file/ff4d5fbbafdf976cfdc032e3bde78de5-Paper.pdf\n...and probably further back than that.\n\nLine 27 another type of methods ->. another type of method\n\n31 the set of classifier -> the set of classifiers\n\nLine 34 what is “population level”? I deduced that it is the N->infinity training data limit, but it might be helpful to explain that.\n\nLine 49 all three methods targets -> all three methods target\n\nLine 58 \"when simple classifers (linear) are used\"….I actually think the result is more broadly applicable...most ML situations will have at least some model misspecification, so suggesting that it's only useful in the linear case is underselling the result …in any practical application the model probably space does not contain the perfect model\n\nLine 63 or mostly focuses - > or mostly focus\nLine 134 set of classifier -> set of classifiers\nLine 185 of special class -> of special classes\nLine 204 the set FPR-TPR pairs -> the set of FPR-TPR pairs\n Does the cutoff method analysis cover the case of cross entropy? Maybe it does and I am not seeing it, but it is not clear to me. \n\nAre you sure you want to publish this in NeurIPs rather than a journal? Wouldn't you rather be able to include the proofs as part of the paper itself? I don't see any concerns regarding negative societal impacts. ",
" This theoretical paper studied three popular ROC curve estimation methods (i.e. Neyman-Pearson classification, weighted method, and cutoff method) and then analyzed at the population level under both correct and incorrect model specifications. In particular, the authors showed that all three methods are consistent in the sense that they all target the optimal ROC curve under 0/1 loss or some surrogate loss under correct model specification. Such results can be regarded as an extension of the classical consistency results from binary classification to ROC estimation methods. The introduced definition of the optimal ROC curve extended the definition in [Scott (2007)] which has much less assumptioin. Pros; \n\n1. It seems that this work is the first-ever-known study to investigate theoretical conditions under which the most popular three methods can recover the optimal ROC optimal curve in the statistical decision framework. \n\n2. The proposed definition of the optimal ROC curve using the Pareto frontier is new and has played a central role in the paper. \n\n3. Some conditions in the literature [Van Trees (2004) and Scott (2007)] are significantly relaxed and extended in the paper. \n\nCons: \n\nThe proofs provided in the supplementary materials are very technical and lengthy. My main question is what is the main novelty in the proof techniques compared to those in the literature. \n\nI have read the authors' reponse and my score remains the same. I have no time to go through the proofs step by step. I am wondering what is the main novelty in the proof techniques in addition to the statements about the new results obtained and the relaxed conditions. For instance, what are the main reasons that you can relax the continuous conditions made in Scott's paper? The authors have not addressed the potential limitation of the work. it would be nice to see what are the remain questions for future work. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"jOdcwCNZhsr",
"vWtpCRRHaFZ",
"R6M8LBB-N8h",
"nips_2022_Ijq1_a6DESm",
"eAT3qJXwUG",
"OluPV9r9SBF",
"qMhKp1UYNXy",
"lYKkzR2BdIY",
"nips_2022_Ijq1_a6DESm",
"nips_2022_Ijq1_a6DESm",
"nips_2022_Ijq1_a6DESm",
"nips_2022_Ijq1_a6DESm"
] |
nips_2022_-o0kPsyzErW | Parallel Tempering With a Variational Reference | Sampling from complex target distributions is a challenging task fundamental to Bayesian inference. Parallel tempering (PT) addresses this problem by constructing a Markov chain on the expanded state space of a sequence of distributions interpolating between the posterior distribution and a fixed reference distribution, which is typically chosen to be the prior. However, in the typical case where the prior and posterior are nearly mutually singular, PT methods are computationally prohibitive. In this work we address this challenge by constructing a generalized annealing path connecting the posterior to an adaptively tuned variational reference. The reference distribution is tuned to minimize the forward (inclusive) KL divergence to the posterior distribution using a simple, gradient-free moment-matching procedure. We show that our adaptive procedure converges to the forward KL minimizer, and that the forward KL divergence serves as a good proxy to a previously developed measure of PT performance. We also show that in the large-data limit in typical Bayesian models, the proposed method improves in performance, while traditional PT deteriorates arbitrarily. Finally, we introduce PT with two references---one fixed, one variational---with a novel split annealing path that ensures stable variational reference adaptation. The paper concludes with experiments that demonstrate the large empirical gains achieved by our method in a wide range of realistic Bayesian inference scenarios. | Accept | The idea of this paper is to tune the reference distribution for parallel tempering to improve efficiency. The key idea is simple: Assume the reference distribution is in the exponential family and use sufficient statistics. Experimental results show that this typically helps in terms of metrics like effective sample size per iteration, though not necessarily in terms of effective samples per second. There are theoretical guarantees which each rely on a long list of assumptions which are deferred to the appendix. While I realize the limitations of space, I echo the reviewers that more discussion of the assumptions should be included in the paper of which should be considered more minor or major. Still this paper proposes a novel approach that is plausibly useful in at least some settings so I recommend acceptance.
A minor point: The font sizes are poorly chosen, to the point of being unreadable if the paper is printed. I had to resort to zooming into individual figures on the computer to reference which was quite tedious. | train | [
"ALRz7TDoAHJ",
"ky8jfzWR5Sa",
"ay2Oaq11ud7",
"GqH9CIBdaBDq",
"8dPtgTLkhKaR",
"_G3SsmlMJj5",
"eZ8VjT890fK",
"w-jp_h5SVD",
"KVVZ4s_ZoOD",
"4hFBLAot6t",
"_bWKWY_SuI7",
"3XQ7q2490rd",
"cLUju83TiI8",
"xl8c3IDRA2l"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the feedback! \n\n> Suppose a particle is stuck in a very deep local mode before it explores the whole domain, the reference distribution is optimized mainly based on samples from this local mode, and hence the reference distribution (obtained from moment matching or similar) at this moment actually limits the exploration instead of facilitating exploration.\n\nPlease note that the variational reference (when used in its stabilized flavour) does not limit exploration compared to standard PT. If standard PT with a fixed reference is able to detect all modes of the distribution, then so will our method. This is because the variational reference is tuned using samples from the target chain where exploration is assisted by **both** the fixed and variational reference. The fixed reference ensures that there is a source of restarts arriving at the target that are unaffected from the optimization of the variational reference. As long as there are restarts occurring from the fixed reference, it is reasonable to expect that the target chain will be mixing (e.g., see Woodard et al. (2009)). These restarts arrive at a linear rate and thus if the variational reference gets trapped in a deep mode, it will have a chance to escape in a **linear** (and not exponential) number of iterations.\n\n\nHowever, your intuition about failures is correct! For example, consider a case with severely separated modes: an equally-weighted mixture of N(1, 1) and N(x, 1) distributions where x is very large. If the prior is N(0,1), it is true that the variational reference might remain in the N(1,1) mode for a long time. However, this example is quite pathological. Standard PT, or any other MCMC method, will fail on this problem too. There is no way to reliably discover modes that are arbitrarily far away.\n\nIn summary, the choice between prior PT methods and ours is clear. For **any** problem, our method is guaranteed to at least perform similarly to standard PT; and on most real problems, our method provides a substantial boost in performance. There are, of course, pathological examples where our method will take a long time to find the optimal variational reference, but contemporary inference algorithms are not expected to perform well on these.",
" I still have some concerns and may need the authors to clarify: suppose a particle is stuck in a very deep local mode before it explores the whole domain, the reference distribution is optimized mainly based on samples from this local mode, and hence the reference distribution (obtained from moment matching or similar) at this moment actually limits the exploration instead of facilitating exploration. \n\nAlthough it seems that in the longtime limit when the reference distribution is fully optimized, it is not an issue. However, it has to spend some exponentially long time to overcome this kind of metastability issue to learn a reasonable well distribution. As such, I don't think this method is promising enough for this challenging task. ",
" Thank you for the continuing discussion!\n\n> The method only seems to works well given that variational inference has found the largest mode (and stays there) to model, which itself is not true. In most cases, your VI may stick at a bad local mode and that makes the algorithm super unstable.\n\nOur paper includes theoretical results to the contrary of both remarks. These results are further supported by empirical evidence in the experiments.\n\nIn Theorem 3.4 of our work (line 181) we prove that the variational reference converges asymptotically to the forward KL minimizer. Please note that in contrast to the reverse KL minimizer, the forward KL minimizer typically covers all of the modes of the distribution.\n\nThe method is additionally supported by theory even in the finite-sample setting. Theorem 3.6 (line 240) guarantees that, in the worst-case scenario, our method has a restart rate at least half of that of standard PT. \n\nFinally, we do not observe any instability in our experiments. Figure 12 in the appendix shows that our method (V--T*--F) reliably results in an accurate approximation to the posterior, which is a substantial improvement beyond standard PT.",
" I maintain my evaluation because I don't think including variational inference is the right path to tackle the challenging issue. The method only seems to works well given that variational inference has found the largest mode (and stays there) to model, which itself is not true. In most cases, your VI may stick at a bad local mode and that makes the algorithm super unstable.",
" Thanks for the clarification!\n\nPlease note that in this work, we require i.i.d. draws from the reference. See line 25 where we require a \"reference distribution … for which i.i.d. sampling is tractable\", and line 138 where we state that \"the variational reference family Q should be...simple enough to enable i.i.d sampling...\". The ability to take i.i.d. draws guarantees that we can sample the reference effectively, which is part of a standard set of sufficient conditions for PT rapid mixing. See, for example, \"Conditions for Rapid Mixing of Parallel and Simulated Tempering on Multimodal Distributions\" by Woodard et al. (2009). Your suggestion of a slightly tempered posterior as the reference is possible to implement but provides no guarantee on the reference samples.\n\nWe also reiterate that regardless of the above, our method improves standard MCMC metrics that are agnostic to PT. We emphasize that Table 3 in our appendix shows that our method can improve the effective sample size (ESS) in the target chain substantially. Figure 14 in our paper as well as Figure 10 of the JRSS-B paper by Syed et al. (2021) further reinforce this point.",
" Q: not possible to simply choose a prior that is close to the posterior distribution\n\nIt is doable. For example, if we propose to simulate the posterior based on Langevin dynamics with temperature $T_1=1$, then setting e.g. $T_2=T_1 + \\text{1e-10}$ for the high-temperature process, then the high-temperature process will asymptotically lead to a prior close to the posterior (two densities have a large overlap), and communication is much improved, but the exploration is not solved at all!!",
" (The first part of the response is contained in the previous comment.)\n\n> “The authors mentioned that adding the fixed reference could help avoid the local optima and get a better estimate of the variational reference. However, the adaptive reference is still single-mode regardless. I wonder whether this would be an issue for harder problems.”\n\nWe emphasize that even a unimodal reference is able to recover modes of the target distribution through tempering, as illustrated in Figure 2 and Figure 12. \n\n> “Parallel tempering seems to be a promising sampling method, but I haven't seen many applications in the context of deep learning. By contrast, many other algorithms (including HMC and AIS) have been used widely in deep learning. I'm curious about the reasons and I wonder if the authors have tried on Bayesian deep learning problems.”\n\nThanks for raising this point! The reason is that only recently has parallel tempering become practical, due to advances such as improvements in parallel architectures combined with work on non-reversible methods. We believe that this methodology is now a fantastic candidate for application to problems in Bayesian deep learning going forward.\n\nWe also note that one can use any MCMC algorithm for “local exploration”—including HMC—as discussed in the background of our paper (Section 2). If HMC is applicable to a given deep learning problem, then we expect our method may improve its performance further. We leave empirical investigation of this to future work.",
" Thank you for reviewing our manuscript, and for your encouraging feedback! Below we provide a summary of the comments from the review, as well as a response to each.\n\n> “This paper proposes to learn the prior distribution adaptively for parallel tempering.”\n\nOther authors sometimes use the terms \"prior\" and \"reference\" interchangeably, something we avoid in our work; we would like to remind the reviewer that the goal of our paper is to tune the reference distribution, not the prior. Indeed the prior is part of the Bayesian statistical model, which we assume is fixed and given in our work. Most past work sets the reference and prior equal to each other (hence being used interchangeably); our work instead tunes the reference distribution carefully to maximize the restart rate. We show that the tuned reference often provides a much higher restart rate than if one sets the reference to the prior. We will endeavour to make sure this distinction is clear in the camera ready.\n\n\n> “Lack of discussions about the assumptions in theoretical analyses. For Propositions 3.1-3.3, the conclusions only hold under some assumptions mentioned in the Appendix. Adding some discussions or giving some intuitive explanations about the settings would be helpful for readers to understand the implications of all these propositions.”\n\nWe completely agree that more discussion of our assumptions could be included in the main text. For the camera-ready, we will be sure to include this. For this response, here is the intuition for Propositions 3.1 and 3.2: we stipulate standard technical conditions sufficient for, e.g., asymptotic consistency of the MLE, a Bernstein-von Mises result for asymptotic normality of the posterior, along with PT-specific assumptions such as efficient local exploration. For Proposition 3.3, we additionally assume that the differences between the posterior mean and MLE, as well as between the inverse Fisher information and scaled posterior variance, are not too large asymptotically.\n\n\n> “All the experiments are done on traditional inference problems with relatively toy models. In this case, I would expect sampling to be \"easy\".”\n\nWe note that of the 15 data sets considered in the paper, 11 are based on real world data. Table 1 in the appendix summarizes all models considered in the paper. \n\nIn terms of scalability, we have indeed considered large models, e.g., d = 10,395 parameters in the Phylogenetic model. This is certainly not a toy problem in the context of accurate characterization of posterior distributions due to multi-modality and/or weak identifiability. In this challenging phylogenetic Bayesian inference problem, our variational PT algorithm increased the median effective sample size by 70% compared to standard PT. This example is highlighted in Table 3.\n\nWe have also considered large datasets, e.g., n = 77,828 in the Vaccines problem and n = 22,548 in the Phylogenetic problem (in addition to two n=100,000 synthetic datasets). Once again, in the context of accurate characterization of Bayesian posterior distributions, these are not small problems.\n\nFinally, as a related addition, we reran the Pollution example (n = 22,548 and d = 275) with 4 million parallel tempering MCMC scans and found that the global communication barrier decreases from 57 (fixed reference, see Table 1 in the appendix) to 7 (variational reference), which corresponds to a seven-fold increase in the asymptotic restart rate. 
This example is highlighted in Figure 18 of the updated appendix.\n\nWe believe that the extensive simulations on such larger models in the appendix illustrate the utility of our method for difficult Bayesian inference problems.\n\n\n> “The idea of tunning prior is quite natural and similar ideas have been discussed in Bayesian deep learning. So it is quite surprising to me that it has never been used before in parallel tempering.”\n\nWe would like to emphasize that we are not tuning the prior in this paper. The prior and likelihood (i.e., the Bayesian statistical model) are fixed and given in our work. We are tuning the reference distribution, which is one end of the “path of distributions” in parallel tempering (the posterior is the other end). In past work on parallel tempering, the reference is often set equal to the prior, mostly for convenience. The main contribution in this work is a method to tune the reference more intelligently. We show that by tuning the reference, we can obtain substantial gains in performance over just setting the reference to the prior.\n\nAs the reviewer points out, there is work in the deep learning literature that tunes the prior itself, e.g., “Model Selection for Bayesian Autoencoders” by Tran et al. (2021). This involves changing the statistical model, which we do not do in our work.\n\nWe will be sure to clarify this distinction in the camera-ready version.\n\n(Response continued in the next comment.)\n",
" Thank you for your efforts in reviewing our manuscript! Below we provide a summary of the comments from the review, as well as a response to each.\n\n> “...parallel tempering not only cares about communication efficiency (or restart rates) but also focuses on the exploration-exploitation trade-off. The current method seems to solve the issue of communication inefficiency, but the impact of exploration is not clear.” \n\nWe are not entirely certain what this comment means. In parallel tempering, there are only two things that can affect performance of the overall MCMC method: the communication efficiency / restart rate, and the efficiency of the MCMC kernel in each chain (we refer to this kernel as “local exploration”, which is common in the parallel tempering literature, see e.g. Syed et al. 2021). In this paper, we aim to improve communication efficiency. Our method can be used with any local MCMC kernel. We show through our extensive experiments that our method improves communication efficiency substantially. \n\nWe would also like to note that our method also improves standard MCMC metrics (that do not rely on using parallel tempering). For example, Table 3 in the appendix shows that our method can improve the effective sample size (ESS) in the target chain substantially. This can also be seen in Figure 14 in the appendix, as well as in past work (e.g., Figure 10 of the JRSS-B paper by Syed et al. 2021). We will clarify this point in the camera-ready version.\n\n\n> “If we don't know how much exploration is sacrificed, why not just adopt a prior that is close enough to the target distribution?”\n\nIt is not possible to simply choose a prior that is close to the posterior distribution. In practice, we only have access to the posterior density up to a normalization constant; this is why methods like MCMC or variational inference are necessary.\n\n> “The combination with a fixed reference further increases my concerns about this method in exploration, which has to resort to a different prior for exploration.”\n\nWe find that for a vast majority of problems, using only a variational reference alone can perform very well. See Figure 13 for examples. The reason for this improvement in performance is because we optimize the forward KL divergence, which covers the mass of the target distribution.\n\nHowever, we do find that in some cases, the variational reference by itself can become trapped in local optima. For example, this can happen when the posterior is multimodal and the variational reference becomes trapped in one mode. See Figure 2 and Figure 13 for examples of where this occurs.\n\nTo guarantee to a practitioner that our method will never do much worse than the basic choice of setting the reference equal to the prior, we advocate using both references. Indeed, we show in Theorem 3.6 that our method will do at most two times worse (due to additional computation time for using two references) than just using the standard choice of setting the reference equal to the prior. Using two references makes our method more robust in practice. When the variational reference does well, as it usually does, our method has substantial gains. In rare cases where it performs poorly, we fall back to the prior reference.\n",
" (The first part of the response is contained in the previous comment.)\n\n> “Line 21: gold standard methodology for distribution approximation -> I would avoid using phrases as gold standard”\n\nWe agree; we will remove this terminology anywhere it appears throughout the manuscript.\n\n\n> “Should have legend on the figures and use the description to describe the plots and what the reader should focus on.”\n\nWe will add legends where necessary for the camera-ready version. Thank you for pointing this out!",
" Thank you for your efforts in reviewing our manuscript, and for your encouraging feedback! Below we provide a summary of the comments from the review, as well as a response to each.\n\n> “A lot of toy experiments but not real world datasets. It would be interesting to see the method applied in a bigger model and a bigger dataset…” \n\n> “My main question is regarding the scalability of the algorithm in deeper and bigger models…”\n\nWe note that of the 15 data sets considered in the paper, 11 are based on real world data. Table 1 in the appendix summarizes all models considered in the paper. \n\nIn terms of scalability, we have indeed considered large models, e.g., d = 10,395 parameters in the Phylogenetic model. This is certainly not a toy problem in the context of accurate characterization of posterior distributions due to multi-modality and/or weak identifiability. In this challenging phylogenetic Bayesian inference problem, our variational PT algorithm increased the median effective sample size by 70% compared to standard PT. This example is highlighted in Table 3.\n\nWe have also considered large datasets, e.g., n = 77,828 in the Vaccines problem and n = 22,548 in the Phylogenetic problem (in addition to two n=100,000 synthetic datasets). Once again, in the context of accurate characterization of Bayesian posterior distributions, these are not small problems.\n\nFinally, as a related addition, we reran the Pollution example (n = 22,548 and d = 275) with 4 million parallel tempering MCMC scans and found that the global communication barrier decreases from 57 (fixed reference, see Table 1 in the appendix) to 7 (variational reference), which corresponds to a seven-fold increase in the asymptotic restart rate. This example is highlighted in Figure 18 of the updated appendix.\n\nWe believe that the extensive simulations on such larger models in the appendix illustrate the utility of our method for difficult Bayesian inference problems.\n\n\n> “Structure is a bit odd...there are no conclusions and no discussion of limitations, future directions, societal impact.”\n\n> “You need a conclusions paragraph.”\n\nThank you for pointing this out – we will be certain to include a summary, discussion of limitations, future work, and societal impact in the camera-ready version. For now, we will summarize the main details here in this response.\n\nThis paper addressed sampling from a complex target distribution within the parallel tempering framework by constructing a generalized annealing path connecting the posterior to an adaptively tuned variational reference. Experiments in a wide range of realistic Bayesian inference scenarios demonstrate the large empirical gains achieved by our method. In terms of the limitations of our method, in this work we have only considered linear paths from the reference distribution to the target. Future extensions of the work could also consider non-linear paths that allow for more efficient parallel tempering communication (see “Parallel tempering on optimized paths” by Syed et al. 2021). Another inherent limitation of our method—which also occurs in most other standard MCMC methods—is that inference becomes more computationally expensive as the dataset size increases. 
Fortunately, because any MCMC kernel can be used within each PT chain, recently developed MCMC methods with sub-linear cost in the data size can be readily incorporated into the PT algorithm.\n\nIn this work, we provide a new algorithm for Bayesian inference; this is a foundational algorithmic contribution which has little direct societal impact. Although the use of Bayesian models in data analysis itself does indeed have societal implications, we do not want to speculate about potential downstream uses of our inference method with particular models.\n\n\n> “I would put the figures closer to the references cause for the reader is very difficult to go back and forth in the text and try to find Figure and the corresponding text”\n\nWe agree that the placement is a bit tricky; unfortunately, due to fairly limited space in the paper, we were forced to make hard decisions regarding arrangement of figures. However, we appreciate the comment and will do our best to clarify the arrangement in the camera ready (it will have an additional page, which should help!). \n\n\n> “Write more clear the advantages of the method in the introduction.”\n\nWe agree that the advantages could be made more clear. We will clarify the advantages of our method in the introduction for the camera-ready version. Specifically, we will clarify that even when one is not in the large data setting, our method can provide large empirical gains compared to fixed-reference PT in a wide range of Bayesian inference scenarios. Our methodology is particularly useful in the common case when the target distribution is complex and the prior is far away from the posterior target.\n\n(Response continued in the next comment.)\n",
" The work presents a new Parallel Tempering scheme that adapts a variational reference distribution within a parametric family. They adopt a parameter to minimize the forward KL divergence between the parametric family and the target distribution. They combine a fixed and an adaptive reference that leads to better restart rate performance than the baseline. Strengths\n-Interesting and witty idea combining a fixed and adaptive reference in the scheme.\n-Extensive theoretical analysis of the proposed scheme. The authors provide theoretical guarantees for the performance and convergence of the method.\n-Good presentation of the work.\n\nWeaknesses\n-A lot of toy experiments but not real world datasets. It would be interesting to see the method applied in a bigger model and a bigger dataset (like an image dataset MNIST, CIFAR10). \n-Structure is a bit odd since there are no conclusions and no discussion of limitations, future directions, societal impact. But again this is a theoretical work so societal impact is not applicable in this case. I would like to see the other stuff more in a separate paragraph though.\n My main question is regarding the scalability of the algorithm in deeper and bigger models. It would be interesting to see the method in those applications since the choice of Exponential distribution would make the scheme a good candidate for SG-MCMC like SGLD. But if this is not the scope of this work this is totally understandable and my decision is not conditioned on that.\n\nAlthough I come from the experimental part of Statistical Machine Learning community I like this work and I think it should be accepted in NeurIPS. This work fits the venue and could provide fruitful discussions between Parallel Tempering researchers.\n\n\nSome comments regarding the presentation:\n-I would put the figures closer to the references cause for the reader is very difficult to go back and forth in the text and try to find Figure and the corresponding text.\n-You need a conclusions paragraph.\n-Write more clear the advantages of the method in the introduction.\n-line 21: gold standard methodology for distribution approximation -> I would avoid using phrases as gold standard\n-should have legend on the figures and use the description to describe the plots and what the reader should focus on.\n This is a theoretical work so negative societal impact is not discussed and the limitations are briefly but not clearly discussed in the main text (Subsection 3.5).",
" The authors proposed an improved version of the parallel tempering algorithm to solve the non-scalability issue with respect to the data size. In particular, the authors show that in the large-data limit, the restart rate degrades arbitrarily to 0, which strongly affects the communications between the chains associated with the target distribution and the prior distribution. To tackle that issue, the authors proposed to adopt variational inference based on the exponential family. Theories and experiments show much better restart rates. Pros: \n\nI like the authors' insight on the weakness of parallel tempering with respect to the data size. Given a fixed schedule of parallel tempering, the communication efficiency does raise a concern in large-data limits. A major reason I am suspecting is that as the number of data points increases, the major mode becomes more dominant, which also inspires the authors to use a tunable prior based on variational inference.\n\nCons:\n\n1. I think the proposed method is not the right solution to tackle that issue. As is known that parallel tempering not only cares about communication efficiency (or restart rates) but also focuses on the exploration-exploitation trade-off. The current method seems to solve the issue of communication inefficiency, but the impact of exploration is not clear. If we don't know **how much exploration is sacrificed**, why not just adopt a prior that is close enough to the target distribution? In that way, we can maintain a large enough restart rate via the most vanilla method.\n\n2. The combination with a fixed reference further increases my concerns about this method in exploration, which has to resort to a different prior for exploration.\n\n3. Regarding the theories, I feel this paper is more suitable for a journal review.\n\n* I am familiar with Syed's JRSS-B'21 paper but the proof details of this work are not carefully checked. Before the exploration issue is elegantly solved, I still have concerns about the usefulness of this method to improve parallel tempering in big data. NA",
" This paper proposes to learn the prior distribution adaptively for parallel tempering. In particular, the prior distribution is tuned to optimize a proxy objective (forward KL divergence to the posterior) with a simple gradient-free moment matching procedure. In theory, the variational prior reference proves to outperform fixed reference, but in practice it may get stuck in a single mode, which the authors resolve by mixing the adaptive and fixed reference distributions. Empirically, the proposed method achieves a big gain over existing methods on Bayesian inference tasks. Strengths:\n- The paper is very well written and easy to follow.\n- The introduced algorithm is intuitive and theoretically sound. In the large data limit, the moment-matched reference could achieve the best possible restart rate of 1/2.\n- The authors fixed the collapsed reference by adding fixed reference back in practice, which seems to work well empirically. To be fair, I'm not familiar with the datasets the authors used in the paper, so I don't know how convincing the empirical results are.\n\n\nWeaknesses:\n- Lack of discussions about the assumptions in theoretical analyses. For Propositions 3.1-3.3, the conclusions only hold under some assumptions mentioned in the Appendix. Adding some discussions or giving some intuitive explanations about the settings would be helpful for readers to understand the implications of all these propositions.\n- All the experiments are done on traditional inference problems with relatively toy models. In this case, I would expect sampling to be \"easy\". For models like deep neural networks, the posterior could be very complicated and I don't think the combination of a fixed and an adaptive reference would be enough. See some of my comments in the weaknesses section.\n\nOther questions:\n- The idea of tunning prior is quite natural and similar ideas have been discussed in Bayesian deep learning. So it is quite surprising to me that it has never been used before in parallel tempering. I'm curious about the authors' thoughts on that.\n- The authors mentioned that adding the fixed reference could help avoid the local optima and get a better estimate of the variational reference. However, the adaptive reference is still single-mode regardless. I wonder whether this would be an issue for harder problems.\n- Parallel tempering seems to be a promising sampling method, but I haven't seen many applications in the context of deep learning. By contrast, many other algorithms (including HMC and AIS) have been used widely in deep learning. I'm curious about the reasons and I wonder if the authors have tried on Bayesian deep learning problems.\n The authors discussed the limitations in the paper and I don't see any negative societal impact of this work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
2
] | [
"ky8jfzWR5Sa",
"ay2Oaq11ud7",
"GqH9CIBdaBDq",
"8dPtgTLkhKaR",
"_G3SsmlMJj5",
"KVVZ4s_ZoOD",
"w-jp_h5SVD",
"xl8c3IDRA2l",
"cLUju83TiI8",
"_bWKWY_SuI7",
"3XQ7q2490rd",
"nips_2022_-o0kPsyzErW",
"nips_2022_-o0kPsyzErW",
"nips_2022_-o0kPsyzErW"
] |
nips_2022_pqCT3L-BU9T | Periodic Graph Transformers for Crystal Material Property Prediction | We consider representation learning on periodic graphs encoding crystal materials. Different from regular graphs, periodic graphs consist of a minimum unit cell repeating itself on a regular lattice in 3D space. How to effectively encode these periodic structures poses unique challenges not present in regular graph representation learning. In addition to being E(3) invariant, periodic graph representations need to be periodic invariant. That is, the learned representations should be invariant to shifts of cell boundaries as they are artificially imposed. Furthermore, the periodic repeating patterns need to be captured explicitly as lattices of different sizes and orientations may correspond to different materials. In this work, we propose a transformer architecture, known as Matformer, for periodic graph representation learning. Our Matformer is designed to be invariant to periodicity and can capture repeating patterns explicitly. In particular, Matformer encodes periodic patterns by efficient use of geometric distances between the same atoms in neighboring cells. Experimental results on multiple common benchmark datasets show that our Matformer outperforms baseline methods consistently. In addition, our results demonstrate the importance of periodic invariance and explicit repeating pattern encoding for crystal representation learning. Our code is publicly available at https://github.com/YKQ98/Matformer. | Accept | This paper had borderline reviews. While the reviewers felt that the method was novel and the presentation very good, they also cited weaknesses such as limited performance gains over baselines and limited novelty. The authors responded in detail to many of the concerns, including with additional experiments, causing several reviewers to increase their scores. We encourage the authors to incorporate aspects of these responses in the final version.
Note: As the AC I will note that I did not find the authors’ very long summary of the discussion helpful in making my decision. I briefly skimmed but did not fully read it. Such a summary/discussion from the authors will obviously be presented through a very biased lens, and so it is much more helpful for me as an AC to directly look at the discussion myself in making a decision, and to ask follow up questions to directly to the reviewers if any clarification is needed. | train | [
"hUJOBrIqvU",
"U57m-Ghf6S",
"X5M2CLzT7wf",
"B0_NN-7E4E",
"4LOZ-a2H6Y",
"K_-LLgDqFz",
"BMzWmTE4LXH",
"DtJPJjPy8GZ",
"k46hHj8Buj",
"3tKGGq9-tj",
"3qO8_-p7BPk4",
"iVrQ65jjN2k",
"B0LwuUcUIFfa",
"q00toC9DCzB",
"Xx6Xa4b4gOcR",
"dWFyVh4j7sP",
"2JA-ADcvjSC0",
"SJXzvF1v-kv",
"ZjL5_XIKE06",
"g9BjlI_U2wz",
"nqGl7xIZBqb",
"ecy56XFGyx0",
"673EHD6Yhlf",
"-AeD0-b2HSJ",
"covwb7ZpzUSH",
"84WLKF3wUbi",
"tJ4luCl2umg",
"4_dMLdjd7SO",
"qWTUs78AOO0"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer ifKM,\n\nThanks again for your valuable comments and suggestions in your initial review, which helps improve our work a lot. Regarding your main concerns on **comparison with Graphormer**, **clarifications in writing**, and **performances when no periodicity or repetitive patterns in the graph**, we have conducted substantial experiments and also revised the paper heavily in our rebuttal on August 1st. Could you please check at your earliest convenience? Thank you so much!\n\n**For comparison with Graphormer**, **we followed your suggestion and conducted experiments comparing with the original Graphormer using their official code.** From the comparison, we notice that the powerful Graphormer can only achieve 0.1389 in terms of test MAE, which is worse than all baseline methods. Performance of our Matformer is 0.0325. We also notice that the training process is unstable for Graphormer. Graphormer fails to integrate periodic invariance. The bad performance and unstable training strongly demonstrate the importance of periodic invariance.\n\n**For some clarifications**, we have **revised our paper heavily to make it clearer to readers.**\n\nRegarding **performances when no periodicity or repetitive patterns in the graph**, we conducted ablation studies in our original paper and **provided further clarifications to make it clearer.**\n\n**We hope that you could reply to our rebuttal and consider raising your score if we have addressed your concerns. Also, please let us know if there are any additional concerns or feedback. Thank you!**\n\nSincerely,\nAuthors",
" Thank you very much for your precious time and your recognition of our work! Your valuable and insightful comments and suggestions help improve our work a lot.\n\n\nSincerely,\n\nAuthors",
" Thank you very much for your precious time! We are glad we addressed all your concerns. Your valuable comments and suggestions help improve our work a lot.\n\nSincerely,\n\nAuthors",
" We genuinely thank you for the recognition of our solid experiments and rebuttals. We sincerely appreciate all the suggestions, the recognition, and the positive rating from you. We provide a point-wise response as follows.\n\n> 1. First, in general, I think it is more appropriate to view each task individually.\n\n- We agree with your thought. \n- We understand you may be concerned with the performance gains for some tasks, like formation energy and band gap for these two datasets (your examples in the last round of comments were for these two tasks).\n- For Formation Energy in The Materials Project and JARVIS, the performance gains beyond ALIGNN are **4.5% and 1.8%** in terms of MAE and **11.9% and 4.0%** in terms of EwT 0.01. These results show that our Matformer is **more powerful** than ALIGNN. Additionally, energy predictions within a small threshold are **meaningful to real-world applications** and to **the material science community.** Beyond this, we can see that when more training samples are available, the performance of our Matformer increases a lot larger compared with ALIGNN, which reveals the **huge potential of Matformer when larger datasets are available.**\n- For band gap (OPT) in JARVIS and band gap in The Materials Project, the performance gains beyond ALIGNN are **3.2% and 3.5%** in terms of MAE and **36.5% and 15.6%** in terms of EwT 0.01. \n- **We respectfully disagree that these kinds of improvements are tunable for these tasks** and we believe the performance gains are significant.\n\n> 2. First, the concepts of EwT and wT are newly introduced in the rebuttal period, which was not mentioned in the initial version. Second, let's say it's ok to introduce new concepts in the revision during rebuttal. EwT and wT, as you mentioned, were introduced in an open challenge in 2020. Prior works you followed like ALIGNN came out after the competition, but it seems that those works didn't really consider EwT in their papers. I'm not a material scientist and you are submitting to a CS conference, I think I can be a bit conservative on the new metrics you introduced in the revision. \n\n- Thank you very much for bringing this up.\n- We **genuinely followed your suggestions in the initial Cons 4, 5 and limitations 2, 3 to include several different evaluation metrics to show the performance gains.** **We deeply appreciated all your suggestions and understand your concerns.** As a result, we added comparisons with ALIGNN in terms of EwT and wT that are different from MAE in the first round of rebuttal.\n- **You are right that previous works didn't really consider EwT in their papers because they were all proposed before the OC20 data and competition (the year 2020) where EwT and wT were introduced, and the only work after the competition is ALIGNN, where EwT was not used either. But this does not mean that EwT and wT are not meaningful in material science.** We believe EwT and wT are meaningful metrics because they can tell a lot about how many predictions can be used practically.\n\n> 3. Given that you said most prior works adopt MAE, I think it should be ok for me to value MAE results more. If you really want to advocate for such new metrics, I think the paper could be better shaped and why these new metrics are significant should be elaborated on. On the other hand, the materials project and jarvis datasets which you indeed evaluated didn't mention such metrics previously. So, it would look a bit inconsistent here.\n\n- Thank you very much for your suggestions. 
\n- As you may notice, **we did add the significance analysis of EwT metrics in Section 6.2, in lines 306-309 in the revised paper** in the first round of rebuttal.\n\n> 4. Third, the new metrics were introduced in another dataset OC20, but you didn't really try your method on the OC20 dataset.\n\n- Thank you very much for your suggestions. \n- As you may notice, OC20 is a dataset for interactions between adsorbates and catalysts. Although catalysts can have periodic structures on the x and y-axis, adsorbates do not have periodic structures. So **the whole input structure is not periodic. Our Matformer is designed for periodic graphs like crystals and can not be applied directly to OC20.** Hence, OC20 is not used to evaluate performance. \n\n> 5. On the other hand, materials project and jarvis datasets which you indeed evaluated didn't mention such metrics previously. So, it would look a bit inconsistent here.\n\n- Thank you very much for bringing this up. \n- Yes, EwT and wT are not introduced in previous works. Previous works (except for ALIGNN) didn't consider EwT in their papers because they were all proposed before the OC20 data and competition (the year 2020) where EwT was introduced. The only work after the competition is ALIGNN, where EwT is not used either, actually. We added the significance analysis of EwT metrics in Section 6.2, in lines 306-309 in the revised paper in the first round of rebuttal.\n\nThe 2nd part is as follows:",
" > 6. About contributions and differences with GATs\n\n- we believe **model architecture design of Matformer is only one of our several contributions**. For the model architecture, we added a detailed comparison with GATGNN and GAT in the above answer in the second round for question 4 and in our revised paper in the related work section. **There are limited similarities between Matformer and GATs, and the major difference lies in that, GATs do NOT use self-attention [1], while Matformer is based on self-attention.** Self-attention is currently the key component of almost all transformers. If anything is unclear, we are more than glad to make it clearer and will respond as quickly as possible.\n\n- More importantly, **we are the first to formally notify and define two important components for crystal graph construction: periodic invariance and periodic patterns encoding.** Specifically, periodic invariance is of great importance for periodic structures like crystals but is rarely noticed by previous works. Treating atoms as individual nodes (like in molecular graphs) breaks periodic invariance. However, this strategy is widely used when handling periodic structures, even in some powerful methods such as Graphormer. Breaking periodic invariance will result in different representations for the same crystal structure, which could confuse machine learning models and produce suboptimal performance. Besides, when dealing with periodic crystals, no previous work tried to encode periodic patterns. Without encoding periodic patterns, the graph representation only captures a local unit (local structure) but fails to capture how the unit (local structure) expands itself in 3D space. Such an important component is even not noticed, not to mention solved in existing works. We also design several effective and efficient techniques to fulfill these two components. To this end, **we believe the discovery, formal definition, and novel technical solutions of periodic invariance and periodic patterns encoding are of great value, importance, and significance to the community.** \n\n- Additionally, **we notice the comparisons between previous works are not fair and honest. We are the first work to fairly benchmark all these baseline methods on The Materials Project and JARVIS datasets.** **We believe the fair benchmarking of previous methods provided by our work is of great value for the community and can generate immediate impacts for ML on crystals.**\n\nThank you very much for all your suggestions and discussions, which help improve our work a lot! If you have any other concerns, please let us know and we are more than glad to answer.\n\n> Reference\n\n[1] Vaswani, Ashish, et al. \"Attention is all you need.\" Advances in neural information processing systems 30 (2017).",
" Thank very much for adding additional explanation and result. I gave score 6 but in my mind it was between 6 and 5, now I'm more certain this paper worth 6 score. ",
" Thank you very much for the insightful questions and your precious time!\n\nWe conducted comparison experiments with Graphormer and revised the paper according to your valuable suggestions. We also provided clarifications and answers for your questions.\n\nAs the deadline for discussion periodic is approaching, we are looking forward to your reply and are more than glad to answer any other questions.",
" > For The Materials Project, ..., the average performance gain is 7.45%. For the JARVIS dataset, the average performance gain is 5.94%.\n\nFirst, in general, I think it is more appropriate to view each task individually. For instance, if you have x% improvement on imagenet and y% improvement on coco, one typically doesn't say (x+y)/2% improvement on average. But, of couse, if you show the average, I could understand what you mean. Second, the averages are really brought by the improvements on Bulk Moduli prediction task (15.7%, 15.8%). If you drop this one, the average of others would immediately drop below 5%. In many ML tasks, around 3-4% gains are generally tunable. \n\n> we included two metrics EwT and wT\n\nFirst, the concepts of EwT and wT are newly introduced in the rebuttal period, which were not mentioned in the initial version. Second, let's say it's ok to introduce new concepts in the revision during rebuttal. EwT and wT, as you mentioned, were introduced in an open challenge in 2020. Prior works you followed like ALIGNN came out after the competition, but it seems that those works didn't really consider EwT in their papers. I'm not material scientist and you are submitting to a CS conference, I think I can be a bit conservative on the new metrics you introduced in the revision. Given that you said most prior works adopt MAE, I think it should be ok for me to value MAE results more. If you really want to advocate for such new metrics, I think the paper could be better shaped and why these new metrics are significant should be elaborated. Third, the new metrics were introduced in another dataset OC20, but you didn't really try your method on OC20 dataset. On the other hand, materials project and jarvis datasets which you indeed evaluated didn't mention such metrics previously. So, it would look a bit inconsistent here.\n\n> But we kindly disagree that it is our main contribution, as we do NOT randomly add self-loops\n\nFirst, I think NO one would add self-loop randomly. In fact, Technically, previous works [1, 2] and many other works (if you search for related papers) have already introduced self-loops, studies or tested self-loops for different purposes or from different perspectives. Second, I understand you are adding self-loops motivated by a novel real-world application (crystal structure) and Matformer is a novel variant of conventional GAT. So I more see it a novel adapted application of GAT to materials property prediction from an ML perspective. I would be more convinced if the same model can be applied to other types of periodic graphs as well. Meanwhile, the experiments and rebuttals look solid and thus I rate the paper positive.\n\n\n[1] Hamilton, William L. \"Graph representation learning.\" Synthesis Lectures on Artifical Intelligence and Machine Learning 14.3 (2020): 1-159.\n\n[2] Wu, Felix, et al. \"Simplifying graph convolutional networks.\" International conference on machine learning. PMLR, 2019.",
" Thank you very much for the following questions and clarifications.\n\n> 1. I'm not sure which paper you are referring to. But here are some papers in 2021 applying E3NN to materials prediction tasks. Chen, Zhantao, et al. \"Direct prediction of phonon density of states with Euclidean neural networks.\". Batzner, Simon, et al. \"SE (3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials.\" E3NN codebase was built back in 2020. Tasks may be different. But there is no doubt these papers including yours all focus on graph encoding.\n\n- Sorry for our misunderstanding. We thought the referred paper was *Mario Geiger, and Tess Smidt, e3nn: Euclidean Neural Networks, arXiv preprint arXiv:2207.09453 (2022)*, because the title of this paper contains the term *E3NN*. Now we are clear that you were talking about the general E(3) neural networks, and thank you for your clarifications! Previously, we added the *Nequip* paper as you mentioned. This time we also included and discussed the paper *Zhantao Chen et al, Direct prediction of phonon density of states with Euclidean neural networks, Advanced Science 8.12 (2021)* in line 31 in our paper. We did some literature review and believe that we should have included all recent works about e3nn on materials.\n\n- In our first round of responses (answers for Cons 1), we added discussions and comprehensive experiments about *Nequip*, which is the model you mentioned as *Simon Batzner, et al,SE (3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials, arXiv preprint arXiv:2101.03164 (2021)*. As *Nequip* applies E3NN to materials prediction tasks, we reimplemented and retrained it on JARVIS five tasks, and the results were included in In our first round of responses (answers for Cons 1). The performance of *Nequip* is much worse due to the lack of periodic pattern encoding. As we also mentioned, those results can be added in the paper if you suggest so.\n\n> 2. When I mentioned this, I was actually looking at Table 1 and Table 2. MAE, as you explained, is the most widely adopted in this field. For instance, in Table 1, formation energy 0.022 -> 0.021, band gap 0.218 -> 0.211; in Table 2, 0.0331 -> 0.0325 etc. I appreciate that Matformer consistently outperforms other baselines like ALIGNN, but I'm not sure how much such small gaps make sense to materials scientists.\n\nThank you for bringing this up.\n\n- Yes, in terms of MAE, the performance gains of our methods beyond ALIGNN are not as large as those of ALIGNN beyond other baseline methods. In this paper, we tend to design methods that are both powerful and efficient. Our methods are three times faster than ALIGNN, which is very important in practice.\n\n- For The Materials Project, in terms of MAE, the performance gains beyond ALIGNN are 4.5%, 3.2%, 15.7%, 6.4%, and **the average performance gain is 7.45%**. For the JARVIS dataset, the performance gains beyond ALIGNN are 1.8%, 3.5%, 5.4%, 15.8%, 3.2%, and **the average performance gain is 5.94%**. Hence, we believe the performance gains beyond ALIGNN in terms of MAE are still significant. \n\n- To demonstrate the significance to materials scientists, as described in our answers for Cons 4 and Cons 5 and limitations 2 in the first round, we included two metrics EwT and wT. These two metrics can demonstrate how many predictions from models thus can be practically useful. 
From the comparison results, we can see that our Matformer consistently outperforms ALIGNN with significant margins, in terms of EwT 0.01, EwT 0.02, wT 0.01, and wT 0.02 for all these tasks in The Materials Project and JARVIS. As we mentioned, we have put most of those results in the revised paper.\n\n- Additionally, the performance gains of our Matformer beyond ALIGNN in terms of EwT mainly come from more accurate energy predictions within an absolute error of 0.01 (compared with 0.02). The demonstration on formation energy is particularly shown in the following table, and more results were already provided in Cons. 4 in the last round. This indicates that Matformer generates more accurate predictions when the prefixed error (threshold) is more strict. Compared with JARVIS, the Materials Project has 15422 more training samples. With more training samples available, **the percentage of predicted energies obtained by Matformer within 0.01 increases by 14.69% (55.86%-41.17%)**, ***which is significantly better than ALIGNN***, **revealing the huge potential of Matformer when larger crystal dataset is available**. We believe these results **make sense for material scientists** because of **much more practically useful energy predictions**, and **reliability and huge potentials when larger datasets are available** of Matformer. \n\n | Formation Energy JARVIS | EwT 0.01 | EwT 0.02 | Formation Energy MP | EwT 0.01 | EwT 0.02 |\n |----|----|----|----|----|----|\n |ALIGNN|39.59%|59.64%||49.94%|71.10%|\n |Matformer|41.17%|60.25%||55.86%|75.02%|",
" > 3. Also, it looks like you didn't mention the model sizes (correct me if I'm wrong). If the model sizes are quite different, comparing their numbers is unfair. This often happens to transformer-based models since the size can go high very easily.\n\nThank you very much for your suggestion.\n\n- We added the comparison of model size between Matformer and ALIGNN in the revised paper (Table 3) and as shown below. It can be seen that compared with previous ALIGNN, our ***Matformer actually has the smaller model size and achieves better results consistently, in terms of not only MAE, but also EwT and wT.***\n | Matformer | ALIGNN |\n |----|----|\n |11528236 (11.0 MB) |16164044 (15.4 MB and 40.2% more than Matformer)|\n\n> 4. For the model Architecture. In general, Matformer has not jumped out of the framework of a series GNNs like GATGNN.\n\nWe respectfully disagree with this. \n\n- First of all, we think our major novelty lies in graph construction with periodic invariance and periodic pattern encoding. In terms of space, the formal definition, importance analysis, and technical solutions of these two components occupy 3 pages. **The graph construction in GATGNN fails to integrate periodic patterns, and the constructed graphs are not precise enough**. Regardless of this, even for the network architecture, we believe our Matformer is different from GATGNN.\n\n- (1) **Basic layers and attention operation are different**: Our Matformer belongs to the Transformer framework, which is different from the GAT[1] framework. GAT framework calculates attention weights using the concatenation of transformed source and target node features, followed by a **linear transformation** and softmax. While the Transformer framework calculates attention weights using the **Hadmard Product** between query and key vectors, followed by softmax or other operations. Actually, GAT computes a very limited kind of attention and the ranking of the attention scores is unconditioned on the query node, and the corresponding negative consequences are detailed in GATv2[2]. But the Transformer framework does not have this problem.\n\n- (2) **Softmax constraints the capability of GATGNN of distinguishing nodes with different degrees**: GATGNN uses softmax to aggregate information from neighbors for a given node $i$, but softmax limits the capability of GATGNN to distinguish nodes with different degrees. For example, consider two different inputs for the attention layer. One is that node $i$ which has 3 neighbors with the same pairwise distances d and node type t, the other is node $i$ which has 1 neighbor with pairwise distance d and node type t. The GATGNN will produce the same results for these two cases because all information from these 3 neighbors is the same. But our Matformer can distinguish these two cases by using layernorm and sigmoid instead of softmax, as we mentioned in Section 4.2. We also conducted experiments comparing the operation of softmax and layernorm with sigmoid in Appendix A.6, where we show our methods are more effective. \n\nWe added the discussion with GATGNN in the related work section in the revised paper.\n\n> References\n\n[1] Veličković, Petar, et al. \"Graph Attention Networks.\" International Conference on Learning Representations. 2018.\n\n[2] Brody, Shaked, Uri Alon, and Eran Yahav. \"How Attentive are Graph Attention Networks?.\" International Conference on Learning Representations. 2021.\n\nWe continue the 3rd part as below:",
" > 5. The main contribution is the self-loop.\n\nThe usage of self-loops is indeed the final approach we used to encode periodic patterns without breaking periodic invariance. But we kindly disagree that it is our main contribution, as we do NOT randomly add self-loops. Our technical approaches are well motivated (precise crystal graph construction), problems are well defined and analyzed (periodic invariance and periodic pattern encoding), and technical approaches are provable. As a result, we propose two techniques to solve the periodic invariance, and *propose to add self-loops to solve periodic patterns encoding*.\n\n- Our work is firstly motivated by precise graph construction for periodic crystal structures. To this end, we first proposed to define periodic invariance and periodic pattern encoding for periodic graphs, which are rarely noticed by the community and of great importance for the representation learning of periodic graphs like crystals. We then give detailed analysis of why these two components are important in Section 3. Beyond this, we propose two solutions to preserve periodic invariance with corresponding proofs in Appendix A.2. These two methods can be used safely by the community and future works without worrying about breaking periodic invaraince. We then propose a new, elegant, and efficient way to encode periodic patterns when constructing graphs. Usage of self-loop is the technical approach to encoding periodic patterns while preserving periodic invariance.\n- Also, the proposed Matformer is demonstrated to be **powerful** (outperforms previous SOTA consistently) and **efficient** (nearly 3 times faster in training and inference time). Due to the nature of period graphs, Matformer is very different from transformers for texts, images, and regular graphs. the Matformer architecture is also different from the GATGNN architecture. All the proposed technical solutions in this work are new, effective, and efficient.\n- Additionally, **we notice the comparisons between previous works are not fair and honest.** They compare with each other with different random split seeds and different numbers of training samples. **We are the first work to fairly benchmark all these baseline methods on The Materials Project and JARVIS datasets. We believe the fair benchmarking of previous methods provided by our work is of great value for the community, and can generate immediate impacts for ML on crystals**.\n- Overall, we think the novelty is sufficient due to 1). we discover, formally define, and propose approaches to resolve two vital components for crystal graph construction; 2). we provide a novel, powerful, and efficient crystal learning model; and 3). we are the first work to compare all baselines fairly with exactly the same data splits.\n\n> 6. While I appreciate the consistent improvements, the gains are not as large as the authors implied in previous sections (at least on MAE). I think for a NeurIPS paper, I'm expecting slightly more.\n\nWe respectfully disagree with this.\n\n- We show the large performance gains **in MAE** of our methods beyond ALIGNN as described in the above answer for question 2 (**7.45% and 5.94%**).\n- We show the significant performance gains **in EwT/wT** of our methods beyond ALIGNN as described in the above answer for question 2. 
(**24.69% in terms of EwT and wT 0.01 for JARVIS, 11.78% in terms of EwT and wT 0.02 for JARVIS, 24.17% in terms of EwT and wT 0.01 for MP, 11.16% in terms of EwT and wT 0.02 for MP**)\n- We illustrate the novelties and contributions of our work in the above answer for question 5.\n\nHope we have addressed your concerns well, and we are looking forward to your reply. Thanks!",
" > We noticed that E3NN is a newly released paper in July 2022\n\nI'm not sure which paper you are referring to. But here are some papers in 2021 applying E3NN to materials prediction tasks.\n\nChen, Zhantao, et al. \"Direct prediction of phonon density of states with Euclidean neural networks.\" Advanced Science 8.12 (2021): 2004214.\n\nBatzner, Simon, et al. \"SE (3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials.\" arXiv preprint arXiv:2101.03164 (2021).\n\n[E3NN](https://e3nn.org/) codebase was built back in 2020.\n\nTasks may be different. But there is no doubt these papers including yours all focus on graph encoding.\n\n> From the comparison numbers, the improvements over others like ALIGNN are not that significant.\n\n1. When I mentioned this, I was actually looking at Table 1 and Table 2. MAE, as you explained, is most widely adopted in this field. For instance, in Table 1, formation energy 0.022 -> 0.021, band gap 0.218 -> 0.211; in Table 2, 0.0331 -> 0.0325 etc. I appreciate that Matformer consistently outperform other baselines like ALIGNN, but I'm not sure how much such small gaps make sense to materials scientists.\n\n2. Also, it looks like you didn't mention the model sizes (correct me if I'm wrong). If the model sizes are quite different, comparing their numbers are unfair. This often happens to transformer-based models since the size can go high very easily.\n\n> comments on the model architecture\n\nIn general, Matformer has not jumped out of the framework of a series GNNs like GATGNN. The main contribution is the self-loop. While I appreciate the consistent improvements, the gains are not as large as the authors implied in previous sections (at least on MAE). I think for a NeurIPS paper, I'm expecting slightly more. \n\n \n\nWith these being said, I appreciate authors' effort in considering my suggestions and providing additional numbers. I'm willing to raise my eval rating a bit.",
" Thank you the author for the comprehensive response to my question. All my concerns have been addressed by the author so that I would like to support the publication of this article.",
" Thanks for your positive feedback on the importance of the task we focus on, the effectiveness of our proposed method. For the concerns and weaknesses, we provide a point-wise response as follows. We also revised the paper accordingly.\n\nFor weaknesses:\n> 1. The motivation of the paper can be further improved: a) Why do we need to construct a graph from the unit-cell coordinates rather than directly using coordinates to conduct representation learning? What's the benefit to borrow the graph structure here? I believe there are a few works for representation learning on 3D positions such as [1], [2] and [3] b) What is the motivation to form self-connecting edges to contain the information of lattice matrix? Why not just concatenate it with the representation of a single unit cell?\n\nWe respectfully disagree with this point.\n\n- For real-world prediction tasks, we usually need to satisfy invariance or equivariance. The concept of invairance lies in a scenario where when we rotate an object (like molecule), the coordinates for each atom in the molecule would change, but the molecule is still the same one! \nML models should not treat the same molecule before and after rotation as different ones. Thus, we need to retain invariance in ML models. That’s why we introduce Unit Cell E(3) Invariance first at the beginning of Sec 3.1 in the paper. Directly taking coordinates may satisfy E(3) equivariance with delicate equivariant operations, but would not satisfy E(3) Invariance. For sure, it would not satisfy periodic invariance. Overall, **for (a), the reason that we need to construct a graph from the unit-cell coordinates and lattices rather than directly using coordinates to conduct representation learning is to ensure the periodic invariance property of periodic graphs.** Specifically, as shown in Figure. 2 in the main paper, we can get totally different 3D point clouds for the same periodic structure by shifting the artificial periodic boundaries. Because of this, traditional representation learning methods for molecular graphs and finite point clouds can not be used for crystals with periodic structures directly. **Additionally, We also show that when coordinates are directly used (Graphormer), the periodic invariance is broken and performance drops significantly (OCgraph in Table. 5).**\n\n- For (b), Firstly, **if doing so, it is like we break the infinite periodic structure into a local unit cell structure and repeating patterns, then learn two representations and pass messages separately.** In this way, when performing message passing for every single atom in the unit cell, **the self repeating patterns in the infinite 3D space of that atom itself is not considered** (because the self-connecting edges are not modeled in the graph). Instead, **if the periodic patterns are encoded in self-connecting edges, when performing message passing, the self repeating patterns for a given atom is fully captured.** Secondly, if we concatenate the lattice information with the representation of a single unit cell, it means we need a separate encoder for learning the lattice. Instaed, we can include the periodic patterns by simply adding self-connecting edges and no extra encoder is needed, which is more elegant.\n\nWe continue at the 2nd part as below:",
" > 2. It seems that section 3.1 and 3.2 can be moved to section 2 or they can just serve as a single section, since it seems that they are the motivation/theoretical ground of the proposed model not the model itself.\n\nThank you for your suggestion. \n\n- Firstly, in the original version, we formatted sections 3.1 and 3.2 as separate sections rather than moving them to section 2. This was due to periodic invariance and periodic repeating patterns for periodic graph learning are often ignored and not addressed by previous works for periodic graphs. Specifically, periodic invariance is of great importance for periodic structures like crystals, but is rarely noticed by previous works. Treating atoms as individual nodes with corresponding 3D coordinates (like in molecular graphs) breaks periodic invariance. However, this strategy is widely used when handling periodic structures, even in some powerful methods such as Graphormer. Breaking periodic invariance will result in different representations for the same crystal structure, which could confuse machine learning models and produce suboptimal performance. Besides, when dealing with periodic crystals, no previous work tried to encode periodic patterns. Without encoding periodic patterns, the graph representation only captures a local structure, but fails to capture how the local structure expands itself in 3D space. Such an important component is even not noticed, not to mention solved in existing works. We propose to formally define them and conducted analysis on them.\n\n- However, after careful consideration, we do believe your suggestion is helpful for clarity.\nThus, we followed your suggestion and formatted sections 3.1 and 3.2 to the new separate section 3 titled ‘Periodic invariance and periodic pattern encoding for crystals’ in the revised paper. \n\n\n> 3. Line 45, the paragraph is titled as \"Crystal property prediction\" but it seems that the whole paragraph is about the definition of periodic lattice. The title might be modified accordingly.\n\nThank you for your suggestion.\n\nWe followed your suggestion and edited the name from \"Crystal property prediction\" to \"Crystal property prediction and crystal structures\".\n\n> 4. It seems that E(3) invariance has been mentioned even in the abstract but it is formally defined in section 3.1.\n\nThank you for bringing this question up. \n\nWe followed your suggestion and added the description for E(3) invariance in the first paragraph in Section 1.\n\n> 5. Line 87, it mentions that \"the structure f a cell remains the same when ...\". Does the structure here mean the the output of the \"f\" function mentioned in the Definition 1 above?\n\nThank you for bringing this up.\n\nThe structure of a cell means the geometric 3D structure (conformation) of the unit cell. It is like the geometric structure of a molecule will not change when you rotate it in the 3D space. We also edited this line in section 3.1 of the revised paper to make it clearer.\n\n> 6. Line 90, It says \"periodic invariance is also shown necessary for generating valid crystal representation\". Why?\n\nThank you for bringing this question up. \n\n- Firstly, as we mentioned in the original paper after this sentence, we show that **ignoring periodic invariance will end up with different representations for the same crystal structure, which will definitely generate unsatisfactory prediction results.**\n\n- For valid crystal representations, we need to map the same crystal to the same representation. 
As shown in Figure 2 in the original paper, we can actually have various unit cell structures for the same crystal by shifting the artificial periodic boundaries. **For these various unit cell structures of the same crystal, we need to map them to a unique representation. Hence, periodic invariance is also shown necessary for generating valid crystal representation.**\n\nIf you have any follow up questions, we are more than glad to answer.\n\nWe continue at the 3rd part as below:",
" > 7. In table 2 the paper compares the time complexity of the proposed method with ALIGNN. It would be great if some complexity analysis is provided here.\n\nThank you for bringing this question up. \n\n- Firstly, following your suggestion, we **provided the complexity analysis of introducing angular information** in crystal graphs in section 6.3 in the paper and Appendix A.6. Assume that we have *n* atoms in a single cell and thus *n* nodes in the original graph. Also assume that every node has at least 12 neighbors (following the graph construction method of Matformer) and there are no self-connecting edges. This will result in a graph G=(V,E) where |V| = n and |E| = 12/2 n = 6n. When converting graph G into L(G), every edge is treated as a node in the line graph. So we have 6n nodes in the line graph. Every edge in the original graph is connecting 2 nodes and every node has (12 - 1) other edges, resulting in 22 neighboring edges for each edge. So we have 22*6n/2=66n edges in the converted line graph. **Compared with the original graph with |V| = n and |E| = 6n, the converted line graph with angle information is super large with the node number of 6n and edge number of 66n, which would induce high computational cost.**\n\n- Moreover, different from molecular graphs, atom number *n* can be super large for crystal graphs. To be concrete, the **maximum number of atoms in a single cell for the Materials Project is 296**. Also, we **provide the atom number statistics of the two crystal datasets** we used in our experiments as follows. As shown in the following table, crystal structures have more atoms than regular molecular graphs. Specifically, there are **more than 10k samples in The Materials Project with more than 50 atoms in a single cell**. Thus, larger complexity will be introduced when adding angular information compared with molecular graphs. \n\n | Dataset | Mean atom numbers in a single cell | Max atom numbers in a single cell | Number of crystals with > 50 atoms in a single cel | Number of crystals with > 100 atoms in a single cell | \n |----|----|----|----|----|\n |JARVIS|10.1|140|308|3|\n |The Materials Project|29.9|296|11911|1866|\n\n- Beyond this, following your suggestion, we **conducted ablation studies** to show the **high complexity of introducing angular information** to our Matformer in Sec 6.3 and in the table below. We introduced angular information using line graphs following ALIGNN, and added two edge layers of the original Matformer to deal with extra angular information. We use cosine plus RBF (as done by ALIGNN) and SBF kernels to encode angular information. The corresponding running times and final test MAE are shown in the following Table. It can be seen that introducing angular information largely increases the training time by nearly three times, and the performance gain is neglectable. It also shows that our proposed Matformer has great power for periodic graph learning, and is much more efficient. \n\n |JARVIS Formation Energy|MAE|Time Per Epoch|Total Training Time|\n |----|----|----|----|\n |Matformer|0.0325|64 s|8.9 h|\n |Matformer + Angle SBF|0.0332|173 s|23.8 h|\n |Matformer + Angle RBF|0.0325|165 s|22.9 h|\n\n- However, we do believe introducing angular information in periodic graphs properly (without breaking periodic invariance) and efficiently (without introducing large complexity) is challenging yet promising, and we aim to tackle it in future research. \n\n\n> 8. 
Some relevant works [4, 5] regarding periodic graphs generation can be discussed in the related works section.\n\nThank you for your suggestion. Following your suggestion, we added the discussion of CDVAE in the introduction section and Deep Generative Model for Periodic Graphs in the related work section in the revised paper. \n\n> 9. The code is not published for now to reproduce the results.\n\nThank you for your suggestion and we attached our code in the supplementary materials.\n\nFor limitations:\n> 1. The limitations of the model are not mentioned, the negative societal impacts of your work are marked as N/A.\n\nThank you for letting us know your concerns.\n\nFollowing your suggestion, we added the limitations and the negative societal impacts in the final section and accordingly updated the Checklist in the revised paper.\n\n\nHope we have addressed your concerns well, and we are looking forward to your reply. Thanks!",
" Thanks for your positive feedback on the clarity of writing, the importance of our focused problem, and experiments including ablation study and evaluations on multiple datasets. For the concerns and weaknesses, we provide a point-wise response as follows. We also revised the paper accordingly.\n\nFor Cons:\n> 1. Essentially, the paper extends the existing graph-based methods for crystal property prediction. However, another direction is Euclidean neural networks. These models are not compared in this paper, such as E3NN.\n\nThank you for bringing this question up. We noticed that E3NN is a newly released paper in July 2022, which is after the Neurips submission deadline. \n\n- Firstly, following your suggestion, we **revised the paper and included the discussion** for the comparison with Euclidean neural nets including E3NN and Nequip in encoding crystal structures in introduction & related work sections.\n\n- Additionally, we also **reimplemented Nequip** (a Euclidean neural network and E(3) Equivariant Neural Network) based on the official code and conducted a comparison between Nequip and Matformer on the five tasks of JARVIS. The results are shown in the below table. The significant performance gain reveals the importance of the encoding of periodic patterns and the modeling capacity of the Matformer layer. `We can add those results in the paper if you suggest so.` And we will add experimental results of E3NN for comparison in the future.\n\n| JARVIS MAE | Formation energy | band gap (OPT) | Total Energy | Ehull | band gap (MBJ) |\n|----|----|----|----|----|----|\n|Nequip|0.048|0.26|0.048|0.17|0.50|\n|Matformer|0.0325|0.137|0.035|0.064|0.30|\n\n> 2. The authors decide to drop the angle information due to high complexity. But ablation study should be provided to support this claim.\n\nThank you very much for the suggestion. \n\n- Firstly, following your suggestion, we **provided the complexity analysis** of introducing angular information in crystal graphs in section 6.3 and Appendix A.6. Assume that we have *n* atoms in a single cell and thus *n* nodes in the original graph. Also assume that every node has at least 12 neighbors (following the graph construction method of Matformer) and there are no self-connecting edges. This will result in a graph G=(V,E) where |V| = n and |E| = 12/2 n = 6n. When converting graph G into L(G), every edge is treated as a node in the line graph. So we have 6n nodes in the line graph. Every edge in the original graph is connecting 2 nodes and every node has (12 - 1) other edges, resulting in 22 neighboring edges for each edge. So we have 22*6n/2=66n edges in the converted line graph. **Compared with the original graph with |V| = n and |E| = 6n, the converted line graph with angle information is super large with the node number of 6n and edge number of 66n, which would induce high computational cost.**\n\n- Moreover, different from molecular graphs, **atom number *n* can be super large for crystal graphs.** To be concrete, the maximum number of atoms in a single cell for the Materials Project is 296. Also, we provide the statistics of atom numbers of the two crystal datasets in the following table. Crystal structures have more atoms than regular molecular graphs. Specifically, there are more than 10k samples in The Materials Project with more than 50 atoms in a single cell. Thus, larger complexity will be introduced when adding angular information compared with molecular graphs. 
\n\n| Dataset | Mean atom numbers in a cell | Max atom numbers in a cell | Number of crystals with > 50 atoms in a cel | Number of crystals with > 100 atoms in a cell | \n|----|----|----|----|----|\n|JARVIS|10.1|140|308|3|\n|MP|29.9|296|11911|1866|\n\n- Beyond this, following your suggestion, we **conducted ablation studies** to show the **high complexity of introducing angular information** to our Matformer in Sec 6.3 and as shown in the table below. We introduced angular information using line graphs following ALIGNN, and added Matformer layers to deal with extra angular information. We use cosine plus RBF (as done by ALIGNN) and SBF kernels to encode angular information. The corresponding running times and final test MAE are shown in the following Table. It can be seen that introducing angular information largely increases the training time by nearly three times, and the performance gain is neglectable. It also shows that our proposed Matformer has great power for periodic graph learning, and is much more efficient. \n\n|JARVIS Formation Energy|MAE|Time Per Epoch|Total Training Time|\n|----|----|----|----|\n|Matformer|0.0325|64 s|8.9 h|\n|Matformer + Angle SBF|0.0332|173 s|23.8 h|\n|Matformer + Angle RBF|0.0325|165 s|22.9 h|\n\n- However, we do believe introducing angular information in periodic graphs properly (without breaking periodic invariance) and efficiently (without introducing large complexity) is challenging yet promising, and we aim to tackle it in future research. \n\nWe continue at the 3rd part as below:\n",
" > 3. As I mentioned, the ablation study is critical for performance and operations. It would be better to expand the ablation study section and move the dataset descriptions to appendix.\n\nThank you for your suggestion. We agree that the ablation study is critical for performance and operations. we moved the dataset descriptions to Appendix. We also added the ablation studies about angular information in Section 6.3 and Appendix A.6, as discussed above.\n\n> 4. From the comparison numbers, the improvements over others like ALIGNN are not that significant.\n\nWe respectfully disagree with this. \n\n`Firstly,` compared with previous SOTA ALIGNN, our Matformer consistently outperforms ALIGNN on two widely used material dataset for 9 different tasks in MAE, and is three times faster in total training time and nearly three times faster in reference time.\n\n`We also added two new metrics`, including energy within threshold (EwT) and prediction within threshold (wT) for evaluation. These metrics are from paper [1], and are **well recognized by the community.** This is because they provide a different perspective from MAE (or RMSE), and are meaningful in practice. For example, EwT measures the percentage of estimated energies that are likely to be practically useful when the absolute error is within a certain threshold, and this threshold could be 0.01 or 0.02.\nWe then conducted comparison with ALIGNN in terms of EwT and wT to better demonstrate the performance gains. \n\n**For all the tasks in JARVIS and formation energy and bandgap tasks in the Materials Project, we use the official prediction results for test sets from ALIGNN to calculate corresponding results. The bulk moduli and shear moduli are not included because no official results from ALIGNN are released.** We use EwT 0.02 to denote energy prediction error within threshold of 0.02 and EwT 0.01 to denote energy prediction error within threshold of 0.01 for tasks of formation energy and total energy. We use wT 0.02 to denote prediction error within threshold of 0.02 and wT 0.01 to denote prediction error within threshold of 0.01 for other tasks. The results are shown in the following tables.\n\n1). Formation Energy for JARVIS and the Materials Project\n\n| JARVIS | EwT 0.01 | EwT 0.02 | MP | EwT 0.01 | EwT 0.02 |\n|----|----|----|----|----|----|\n|ALIGNN|39.59%|59.64%||49.94%|71.10%|\n|Matformer|41.17%|60.25%||55.86%|75.02%|\n\nIt can be seen from this table that Matformer has more accurate predictions for crystals. Interestingly, the performance gains of our Matformer beyond ALIGNN in terms of EwT mainly **come from more accurate energy predictions within an absolute error of 0.01.** This indicates that Matformer generates more accurate predictions when the prefixed error (threshold) is more strict. Compared with JARVIS, the Materials Project has 15422 more training samples. With more training samples available, the percentage of predicted energies obtained by Matformer within 0.01 increases by 14.69%, which is significantly better than ALIGNN, revealing **the huge potential of Matformer when larger crystal datasets are available.** We also revised Section 6.2 of the paper and added experiments and discussions on the new metric EwT.\n\n2). 
Band Gap (MBJ) and Band Gap (OPT) for JARVIS \n\n| Band Gap (MBJ) | wT 0.01 | wT 0.02 | Band Gap (OPT) | wT 0.01 | wT 0.02 |\n|----|----|----|----|----|----|\n|ALIGNN|18.10%|31.87%||48.73%|61.79%|\n|Matformer|31.70%|43.64%||56.31%|64.02%|\n\nIt can be seen from the table that there are 13.6% gain in terms of wT 0.01 for Band Gap (MBJ) and 7.58% gain in terms of wT 0.01 for Band Gap (OPT), which are very significant.\n\n3). Ehull and Total Energy for JARVIS\n\n| Ehull | wT 0.01 | wT 0.02 | Total Energy | EwT 0.01 | EwT 0.02 |\n|----|----|----|----|----|----|\n|ALIGNN|22.92%|39.52%||35.09%|55.20%|\n|Matformer|28.37%|44.81%||36.84%|57.36%|\n\n4). Band Gap for The Materials Project\n\n| MP-BandGap | wT 0.01 | wT 0.02 | \n|----|----|----|\n|ALIGNN|19.79%|30.15%|\n|Matformer|27.01%|35.22%|\n\n`Overall,` **Matformer outperforms ALIGNN in terms of MAE, EwT 0.01, EwT 0.02, wT 0.01 and wT 0.02 for all these seven tasks in JARVIS and the Materials Project, revealing the significant performance gain and modeling capacity beyond previous state-of-the-art ALIGNN.** We included the comparison on formation energy-MP, formation energy-JARVIS and total-energy JARVIS in Section. 6.2 of the revised paper due to space limit. All the results can be added to the main paper if you suggest so. In addition, Matformer is 3 times faster than ALIGNN, thus is much more efficient.\n\n> Reference\n\n[1] Chanussot, Lowik, et al. \"Open catalyst 2020 (OC20) dataset and community challenges.\" ACS Catalysis 11.10 (2021): 6059-6072.\n\nWe continue at the 3rd part as below:\n",
" > 5. Experiments only evaluate MAE. More metrics can be added.\n\nThank you for bringing this up. \n\n- For the evaluation metrics, we directly follow previous works (CGCNN, MEGNET, SchNet, ALIGNN, etc. ) for crystal property prediction and use MAE as evaluation metric. \n\n- Following your suggestion, we revised the paper and added energy within threshold (EwT) as an evaluation metric in Section 6.2. `More results of prediction within threshold as provided above can be added if you suggest so.` We think the metrics EwT and wT are totally different from MAE (also RMSE). EwT and wT evaluate the percentage of precise predictions within a tight threshold, and can be used to demonstrate the percentage of useful predictions of a prediction model for crystals. These two metrics are pretty useful in practice. MAE and RMSE are similar, and they both somehow evaluate the divergences in distributions between the whole predictions and ground truth. Hence, we do believe EwT is better due to a new perspective, and we may add results on RMSE in the next version.\n\nFor Questions:\n\n> 1. I belielive $l_1, l_2, l_3$ are 3d vectors with only one dim non-zero?\n\n- As we mentioned in the original version in Section 2, crystals usually possess irregular shapes in practice and $l_1, l_2, l_3$ are not always orthogonal in 3D space.\n\n- Hence, each of $l_1, l_2, l_3$ is indeed a 3d vector. However, for each of them, all the three entries can be non-zero, because the lattice structure is not always cubic.\n\n> 2. It's better to include as a superscript in your denoting for the l-th layer\n\nThank you for your suggestion, we edited it in the revision in Section 4.2.\n\n> 3. What do you store in edge feature vector?\n\nAs we mentioned in Section 2 (line 70) in the original version and line 69 in the revised version, we use Euclidean distance as the initial edge feature. Also, in our original version, we mentioned in the Appendix Matformer Configurations section that we map the Euclidean distance to a 128-dimensional embedding using 128 RBF kernels with centers from 0.0 to 8.0.\n\n> 4. In Table 1, it seems that the numbers from your reproduction and the original paper differ by a lot. Why is that?\n\n- Firstly, as stated in the original paper in the experimental results section (Sec 6.2) for The Materials Project, we noticed **previous works actually compared with each other with different random splits and even different numbers of training samples.** We think **these comparisons are not fair and honest enough.** For example, CGCNN only uses 28046 training samples for the task of Formation Energy in the Materials Projects, and the original MAE result is 0.039. To make the comparison between all baselines fair, we retrained them using exactly the same data split and also tuned some hyperparameters including learning rates and optimizer choices, to obtain the best results for each baseline model.\n\n- Beyond this, **our retrained results for baseline methods are better than their reported results in most cases.** We think the gaps are partially due to different data splits as stated above.\n\n\n> 5. Why GATGNN etc. do not show up in Table 3?\n\n- Firstly, for the JARVIS dataset, we directly follow the experimental settings of ALIGNN, and baselines including GATGNN are not included in ALIGNN settings in their paper. 
\n\n- However, following your suggestion, we **retrained and reimplemented GATGNN, MEGNET and SchNet for these five tasks in JARVIS.** We revised the paper and reported those results in Table 2.\n\n> 6. Whether to add the angle information is always debatable in this field. ALIGNN often becomes the second best numbers and it encodes angles. But you claimed angles \"induce high complexity\", could you clarify on this?\n\nThank you for bringing this question up. \n\n- Firstly, as analyzed in section 6.3 and Appendix A.6 in the revised versions, adapting angular information introduces large complexity. **We show that for the original crystal graph with n nodes and 6n edges, the corresponding line graph with angles will have 6n nodes and 66n edges, leading to high computational cost.**\n\n- Also, following your suggestion, we conducted **running time experiments when angular information is included**, the details can be found in the above answers for Cons. 2. We also revised the paper and added those results along with analysis in ablation study in Section 6.3 and Appendix A.6.\n\n\n> 7. For each material property, do you train a separate model? or you use one learnt representation for all?\n\nThank you for bringing this question up. \n\n- As we described in Section 5.1 in the original version and section 6.1 in the revised version, we slightly adjust learning rates and training epochs for different tasks, and we provide detailed hyperparameter configurations for different tasks in Appendix Matformer Configurations. \n\n- So yes, **following previous works (they did the same)**, we train a separate model for each material property.\n\nWe continue at the 4th part as below:",
" For limitations:\n\n> 1. I do believe it would be better for authors to include the discussion for the comparison between Euclidean neural nets and graph neural nets in encoding the crystals.\n\nThank you for the suggestion. \n\n- We **added the discussion for the comparison between Euclidean neural nets (Nequip) and our Matformer in the related work section in the revision.**\n\n- Beyond this, we also **reimplemented Nequip for JARVIS five tasks**, and the results are shown in above answers for Cons 1. We can add the results in Table 2 in the paper if you suggest so.\n\n> 2. Some interpretations on the results can help me to better understand the performance gain.\n\nThank you for bringing this up. \n\nWe **added the EwT and wT comparisons with previous SOTA ALIGNN** as described in answers for Cons 4. **Overall, it can be seen from metrics of EwT, wT, and MAE that our Matformer outperforms ALIGNN consistently under different metrics.** As discussed in details in Cons 4, **better EwT and wT means our methods can generate more predictions which are practically useful.**\n\n> 3. More evaluation metrics would make it more convincing.\n\nThank you for bringing this up.\n\nWe **added EwT as the evaluation metric** in the main paper in Section 6.2, and **more results of EwT and wT** are shown in the answers for Cons 4.\n\nHope we have addressed your concerns well, and we are looking forward to your reply. Thanks!\n\n\n",
" Thank you for your review and insightful comments. We genuinely appreciate your recognition of the originality, quality, and significance of our paper! For your concerns about the potential weaknesses, we provide a point-wise response as below. We also revised the paper accordingly.\n\nFor weaknesses and questions stated above:\n> 1. One of your direct competitors is Graphormer, but there are no comparisons, why?\n\nThank you for the suggestion. \n\n- Firstly, as mentioned in Section 3.1 (Significance of periodic invariance) in the original version, **Graphormer breaks periodic invariance when applied to periodic graphs**. In Table 5 of the original version & section 6 of the revised version, we **compared two graph construction methods used by Matformer and Graphormer. For fair comparison, the graph construction method used by Graphormer is implemented on top of the Matformer network architecture, denoted as OCgraph in the paper.** Experiments are conducted on JARVIS Formation Energy task to show the influence of breaking periodic invariance. The performance of OCgraph is 0.0530 while our Matformer is 0.0325 (lower is better), which is a significant difference.\n\n- Besides, **following your suggestion, we also conducted experiments compared with the original Graphormer using their official code** on Jarvis dataset Formation Energy task, with some modifications to make it fit in the material data on a single A6000 GPU. The results are shown in the table below. It is worth mentioning that training Graphormer with 1 block and 4 layers on JARVIS formation energy for 500 epochs needs 5 days on one A6000 GPU. The test MAE result for Graphromer is 0.1389, which is worse than all the baseline methods. **We also notice that the training process is unstable for Graphormer, and the training loss is easy to surge. This bad performance and unstable training strongly demonstrate the importance of periodic invariance.** `We can add those results and analysis in the paper if you suggest so.`\n\n | JARVIS Formation Energy | Test MAE | Total Training Time|\n |----|----|----|\n |Matformer| 0.0325| 8.9 h|\n |Graphormer| 0.1389| 5 days|\n\n> 2. In line 183 you said \"... rigorously prove ...\", where is the proof?\n\nThank you for bringing this question up.\n\nFor this question, the proof is in Appendix A.2. Thank you for pointing this out and we added a reference in the main paper after that sentence in line 188 to make the proof easier to find.\n\n> 3. In line 46 you used $\\mathbb{N}$ for the categorical case... the natural number is infinite, do you have infinite classes?\n\nThank you for bringing this question up.\n\nWe edited it to {$1, 2, \\cdots, C$} in the main paper, where $C$ is the number of classes for categorical tasks. We work on the crystal representation learning, and in practice, there usually have finite categories for classification.\n\n> 4. In Eq 2 the definition of $q_{ij}^h$ consists of the concatenation of $q_i$ three times... is it correct?\n\nYes, your understanding is correct. \n\n- Firstly, as you may notice, there are **three parts of messages** from neighboring node $j$ to the center node $i$; those are, the node feature message from node $i$ and $j$, and the edge feature from edge $e_{ij}$. Hence, we formulate $q_{ij}^h$ to be the concatenation of $q_i$ three times to be in the same dimension with $k_{ij}^h$, which is the concatenation of $k_i, k_j$ and $e_{ij}^{h’}$. Importantly, by doing this, we let $q_i$ attend each of the three components in $k_{ij}^h$. 
As a result, we calculate the attention coefficients for the whole message containing adequate information from node $i$, $j$ and edge $e_{ij}$. This is because we compute a similarity score vector for each of $(q_i, k_i), (q_i, k_j)$, and ($q_i, e_{ij})$, as in Eq $2$. **We revised the texts following in Eq $2$ to clarify this.**\n\n- Besides, we also provided a detailed illustration of operations in Matformer Layer in Figure 2 of Appendix, which you may have already noticed. If any confusion still exists, please let us know. We are more than glad to make it clearer. \n\nWe continue at the 2nd part as below:",
" > 5. In line 215, you said that one of the differences with respect to Graphormer is that \" ... Graphormer treats every atom as a single node.\", but you are doing the same following the definition of $a_i \\in A$. Can you elaborate more?\n\nThank you for bringing this question up.\n\n- As we state several times in the paper, atom $i$ repeats itself infinitely (like it has infinite duplicates in 3D space) in a crystal structure. The sentence that \" ... Graphormer treats every atom as a single node.\" means Graphormer **does not consider the repeating positions** $p_i + k_1l_1 + k_2l_2 + k_3l_3$ for a given atom $i$ in the unit cell, but **just considers the position $p_i$ of it**. For sure, the same part lies in that, both Graphormer and our method preserve atom features for a given atom, as you said $a_i \\in A$ for each atom $i$. However, as we show in the paper, **even though preserving the atom features, ignoring duplicated positions for the same atom will still broke periodic invariance**, as illustrated in Figure 2 also. Hence, in our methods, we treat all the duplicates $p_i + k_1l_1 + k_2l_2 + k_3l_3$ of atom $i$ as the same node in graph construction, which is demonstrated satisfying periodic invariance.\n\n- Secondly, we do an **ablation study to investigate the graph construction method used by Graphormer**, marked as OCgraph in Table 5 in the original paper. For fair comparison, OCgraph is the combination of Graphormer’s graph construction method and Matformer’s architecture. Because of breaking periodic invariance, **the graph construction method used by Graphormer can not produce satisfactory results**. We can see the performance drops from 0.348 to 0.530 when exactly the same model architecture of our Matformer is used. \n\n- Additionally, following your suggestion, we **also conducted a comparison between the original Graphormer and Matformer**, as shown and analyzed in above answer 1 in part 1. The original Graphormer is even much worse than OCgraph.\n\nFor the limitations mentioned by reviewer ifKM:\n> 1. I wondered about the model's performance when no periodicity or repetitive patterns in the graph are present.\n\nThank you very much for letting us know your concerns. \n\n- For this question, as you may have noticed, in the main paper we provided the ablation study of breaking periodic invariance and dropping periodic encodings in Section 6.3.\n\n- For the model's performance without periodicity in the graph, for the same model architecture, using the graph construction method breaking periodic invariance (ignoring periodic information and treating it as a molecular graph) results in a 53% performance drop, and the MAE increased from 0.348 to 0.530 (lower is better). \n\n- For the model's performance without repetitive patterns in the graph, for the same model architecture, dropping periodic patterns encoding leads to a significant performance drop from 0.325 to 0.337. \n\nHope we have addressed your concerns well, and we are looking forward to your reply. Thanks!",
" We genuinely appreciate your review and insightful comments. Below we provide a point-wise response to address the weaknesses. We also revised the paper accordingly.\n\nFor weaknesses and questions stated above:\n> 1. Even stated several times, the proposed graph construction methods are similar to previous works. In this perspective, the novelty is a bit reduced.\n\nWe kindly disagree with this point. \n\n- Firstly, in this work, we **discover and formally define two important components for crystal graph construction**: periodic invariance and periodic patterns encoding. Firstly, **periodic invariance is of great importance for periodic structures like crystals, but is rarely noticed by previous works.** Treating atoms as individual nodes (like in molecular graphs) breaks periodic invariance. However, this strategy is widely used when handling periodic structures, even in some powerful methods such as Graphormer. **Breaking periodic invariance will result in different representations for the same crystal structure, which could confuse machine learning models and produce suboptimal performance.** Besides, when dealing with periodic crystals, no previous work tried to encode periodic patterns. Without encoding periodic patterns, the graph representation only captures a local unit (local structure), but fails to capture how the unit (local structure) expands itself in 3D space. **Such an important component is even not noticed, not to mention solved in existing works.** To this end, we believe the discovery and formal definition of periodic invariance and periodic patterns encoding are of great value, importance, and significance to the community.\n\n- More importantly, we also **propose two solutions to preserve periodic invariance**, and design **a new, elegant, and efficient way to encode periodic patterns** when constructing graphs. *Also, the proposed Matformer is demonstrated to be* **powerful** (outperforms previous SOTA consistently) and **efficient** (nearly 3 times faster in training and inference time). Due to the nature of period graphs, Matformer is very different from transformers for texts, images, and regular graphs. All the proposed methods in this work are new, effective, and efficient.\n\n- Overall, we think the novelty is sufficient due to 1). we discover, formally define, and propose approaches to resolve two vital components for crystal graph construction; and 2). we provide a novel, powerful, and efficient crystal learning model.\n\nWe continue at the 2nd part as below:",
" > 2. The performance improvement compared with the strong baseline ALIGNN is small. Given ALIGNN uses angular information, it would be interesting if the author can adapt angular information to the designed Matformer. If the author can further improve the current Matformer, this paper will become extremely good.\n\nThank you for your recognition. \n\n`Firstly,` about the performance gains beyond the strong baseline ALIGNN, **our Matformer consistently outperforms ALIGNN on two widely used material datasets for 9 different tasks in terms of MAE**, and is **three times faster** in total training time and nearly three times faster in reference time.\n\n`Secondly,` we want to particularly point out that, we aim to **achieve both effectiveness and efficiency**. Without using angles, our Matformer is three times faster than ALIGNN in training time, and better than ALIGNN in performance. **We further provided more analysis and experimental results as below and in Section 6.3 and Appendix A.6 in our revised paper.**\n\n- Firstly, **adapting angular information introduces large complexity.** Assume that we have $n$ atoms in a single cell and thus $n$ nodes in the original graph. Also assume that every node has at least 12 neighbors (following the graph construction method of Matformer) and there are no self-connecting edges. This will result in a graph $G=(V,E)$ where $|V| = n$ and $|E| = 12/2 n = 6n$. When converting graph $G$ into $L(G)$, every edge is treated as a node in the line graph. So we have $6n$ nodes in the line graph. Every edge in the original graph is connecting $2$ nodes and every node has $(12 - 1)$ other edges, resulting in $22$ neighboring edges for each edge. So we have $22*6n/2=66n$ edges in the converted line graph. Compared with the original graph with $|V| = n$ and $|E| = 6n$, ** the converted line graph with angles is super large with the node number of $6n$ and edge number of $66n$, which would induce high computational cost.** \n\n- Also, following your suggestion, we **conducted experiments to adapt angular information in Matformer .** We add two edge layers in the original Matformer to deal with extra angular information. We use cosine plus RBF (as done by ALIGNN) and SBF kernels to encode the angular information. The corresponding running times and final test MAE are shown in the following Table.\n\n\n |JARVIS Formation Energy|MAE|Time Per Epoch|Total Training Time|\n |----|----|----|----|\n |Matformer|0.0325|64 s|8.9 h|\n |Matformer + Angle SBF|0.0332|173 s|23.8 h|\n |Matformer + Angle RBF|0.0325|165 s|22.9 h|\n\n- It can be seen from this table that introducing angular information by using line graphs will result in **nearly 3 times training time per epoch and in total, without much performance gain.** The main reason that ALIGNN (also the famous model DimeNet [1] for molecules) uses angles lies in that, angle information helps determine the shape of a geometric graph thus improving the model performance. We think periodic invariant graph construction add periodic patterns encoding in Matformer *already contain sufficient information to distinguish different crystals.* We revised the paper and added those results along with analysis in ablation study in Section 6.3 and Appendix A.6.\n\n- We do believe it is an excellent point to improve the current Matformer by introducing angular information properly to satisfy both periodic invariance and to encode periodic patterns with relatively low time complexity. 
The research on periodic graphs is new but promising, and there still exist a lot of ideas worth exploring. We aim to tackle them (especially integrating the angle information effectively and efficiently) in future research.\n\n> Reference\n\n[1] Klicpera, Johannes, Janek Groß, and Stephan Günnemann. \"Directional message passing for molecular graphs.\" ICLR 2020.\n\nWe continue at the 3rd part as below:",
" `Thirdly, beyond this,` to show the significant performance gains of our Matformer beyond ALIGNN, we also **added two new metrics**, including energy within threshold (EwT) and prediction within threshold (wT) for evaluation. These metrics are from paper [1], and are **well recognized by the community. This is because they provide a different perspective from MAE (or RMSE), and are meaningful in practice**. For example, EwT measures the percentage of estimated energies that are likely to be practically useful when the absolute error is within a certain threshold, and this threshold could be 0.01 or 0.02.\n**We then conducted comparison with ALIGNN in terms of EwT and wT to better demonstrate the performance gains.**\n\n- **For all the tasks in JARVIS and formation energy and bandgap tasks in the Materials Project, we use the official prediction results for test sets from ALIGNN to calculate corresponding results. The bulk moduli and shear moduli are not included because no official results from ALIGNN are released.** We use EwT 0.02 to denote energy prediction error within the threshold of 0.02 and EwT 0.01 to denote energy prediction error within the threshold of 0.01 for tasks of formation energy and total energy. We use wT 0.02 to denote prediction error within the threshold of 0.02 and wT 0.01 to denote prediction error within the threshold of 0.01 for other tasks. Results are reported below.\n\n | Formation Energy JARVIS | EwT 0.01 | EwT 0.02 | Formation Energy MP | EwT 0.01 | EwT 0.02 |\n |----|----|----|----|----|----|\n |ALIGNN|39.59%|59.64%||49.94%|71.10%|\n |Matformer|41.17%|60.25%||55.86%|75.02%|\n\n- It can be seen from this table that Matformer has more accurate predictions for crystals. Interestingly, the performance gains of our Matformer beyond ALIGNN in terms of EwT mainly **come from more accurate energy predictions within an absolute error of 0.01.** This indicates that Matformer generates more accurate predictions when the prefixed error (threshold) is more strict. Compared with JARVIS, the Materials Project has 15422 more training samples. 
With more training samples available, the percentage of predicted energies obtained by Matformer within 0.01 increases by 14.69%, which is significantly better than ALIGNN, revealing **the huge potential of Matformer when larger crystal dataset is available.** We also revised Section 6.2 of the paper and added experiments and discussions on the new metric EwT.\n\n | Band Gap (MBJ) JARVIS | wT 0.01 | wT 0.02 | Band Gap (OPT) JARVIS | wT 0.01 | wT 0.02 |\n |----|----|----|----|----|----|\n |ALIGNN|18.10%|31.87%||48.73%|61.79%|\n |Matformer|31.70%|43.64%||56.31%|64.02%|\n\n- It can be seen from the table that there are 13.6% gain in terms of wT 0.01 for Band Gap (MBJ) and 7.58% gain in terms of wT 0.01 for Band Gap (OPT), which are very significant.\n\n\n | Ehull JARVIS | wT 0.01 | wT 0.02 | Total Energy JARVIS | EwT 0.01 | EwT 0.02 |\n |----|----|----|----|----|----|\n |ALIGNN|22.92%|39.52%||35.09%|55.20%|\n |Matformer|28.37%|44.81%||36.84%|57.36%|\n\n\n | MP-BandGap | wT 0.01 | wT 0.02 | \n |----|----|----|\n |ALIGNN|19.79%|30.15%|\n |Matformer|27.01%|35.22%|\n\nOverall, **Matformer outperforms ALIGNN in terms of MAE, EwT 0.01, EwT 0.02, wT 0.01, and wT 0.02 for all these seven tasks in JARVIS and the Materials Project, demonstrating that our Matformer is more powerful (in different metrics thus from different perspectives) than previous state-of-the-art ALIGNN.** We included the comparison on formation energy-MP, formation energy-JARVIS, and total energy-JARVIS in Section 6.2 of the revised paper due to space limit. **All the results can be added to the main paper if you suggest so.** Additionally, **Matformer is three times faster than ALIGNN and nearly 3 times faster than Matformer + Angle.** When angular information is added to our Matformer, not much performance gain is shown. We think **periodic invariant graph construction and periodic patterns encoding in Matformer already contain sufficient information to distinguish different crystals.**\n\nHope we have addressed your concerns well, and we are looking forward to your reply. Thanks!\n\n> Reference\n\n[1] Chanussot, Lowik, et al. \"Open catalyst 2020 (OC20) dataset and community challenges.\" ACS Catalysis 11.10 (2021): 6059-6072.",
" The paper studies representation learning for periodic objects such as crystal. The paper formally defines the periodic invariance property and periodic pattern encoding. To achieve both periodic invariance and periodic pattern encoding, the author proposes two type of constructions of local graphs, modified from existing methods (CGCNN and Graphnormer). The author further develops a proper message passing based transformer over the proposed graph construction method. Overall the paper is well-written and informative for representation learning on periodic objects. The experimental result shows the effectiveness of their designs. Strengths:\n1. The written is extremely clear and informative in terms of analysis and literature review. Reading the paper is a kind of joy. The author also clearly state the difference with respect to several similar previous works.\n2. The proposed method achieves all invariant properties desired. The design is simple, clean, and meaningful. \n3. The proposed properties for periodic objects are formal and informative. if these properties are proposed the first time here, this is a good contribution. \n\nWeaknesses:\n1. Even stated several times, the proposed graph construction methods are similar to previous works. In this perspective, the novelty is a bit reduced. \n2. The performance improvement comparing with the strong baseline ALIGNN is small. Given ALIGNN uses angular information, it would interesting if the author can adapt angular information to the designed Matformer. If the author can further improve the current Matformer, this paper will become extremely good. Please see previous comments. NA",
" In this paper, the author/s proposed a graph transformer architecture, named Matformer, to infer the physical properties of crystal materials in which their atomic representation is encoded as graphs. The Matformer architecture encodes repeating patterns and is invariant to periodicity; these two properties are fundamental in modeling crystal materials. The performance are assessed on standard benchmarks, considerably outperforming the baselines. **Strengths:** \n\n*Originality*\nProposing a model that is invariant to pattern periodicity and that it explicitly captures repetitive patterns in a graph is a novelty.\n\n\n*Quality*\nThe quality of this work is good; almost all proofs are present in the paper or in the appendix. The experimental part is clear and the reference adequate.\n\n*Significance*\nI think this work is significant for the community since it addresses specific properties of graphs that are not commonly studied.\n\n**Weaknesses:** \n* One of your direct competitors is Graphormer, but there are no comparisons, why?\n* In line 183 you said \"... rigorously prove ...\" , where is the proof?\n* In line 46 you used $\\mathbb{N}$ for the categorical case... the natural number is infinite, do you have infinite classes?\n* In Eq 2 the definition of $q_{ij}^h$ consists of the concatenation of $q_i$ three times... is it correct?\n* In line 215, you said that one of the differences with respect to Graphormer is that \" ... Graphormer treats every atom as a single node.\", but you are doing the same following the definition of $a_i \\in \\mathbf{A}$. Can you elaborate more? My questions are reported in the weaknesses I wondered about the model's performance when no periodicity or repetitive patterns in the graph are present.",
" This paper considers the property prediction tasks for crystal materials. This problem can be viewed as predicting a global target for a graph. The uniqueness is that the graph is periodic. The graph crystal structure repeats in the 3D space. Past works didn't really emphasize on this periodic invariance, or handle the periodic pattern encoding. The most commonly seen strategy is to set a fixed radius for an atom, and top $t$ closest neighbor atoms to this atom are connected to this atom with an edge. Such design implicitly considers the periodicity of the crytals. However, in some corner cases, such invariance may not be guaranteed. To this end, this paper proposes to explicitly capture the periodicity through self-connecting edges. Intuitively, the self-connecting edges connect one atom with its nearby duplicates. Since it is in the 3D space, there are multiple duplicates in different directions. So, multiple self-connecting edges are added to model the periodicity. The resulting model, Matformer, outperforms other GNN-based methods on two datasets, materials project and JARVIS. Pros:\n1. The paper is written clearly. Though there are some typos, the general idea and methods are easy to follow. And the real-world problem is also important in materials science.\n2. The ablation study is presented. I think the key argument of this paper is the periodic invariance. So proving the importance of the periodic module is critical.\n3. Multiple datasets are included for validation. \n\nCons:\n1. Essentially, the paper extends the existing graph based methods for crystal property prediction. However, another direction is Euclidean neural networks. These models are not compared in this paper, such as E3NN.\n2. The authors decide to drop the angle information due to high complexity. But ablation study should be provided to support this claim.\n3. As I mentioned, the ablation study is critical for performance and operations. It would be better to expand the ablation study section and move the dataset descriptions to appendix.\n4. From the comparison numbers, the improvements over others like ALIGNN are not that significant. \n5. Experiments only evaluate MAE. More metrics can be added. 1. I belielive $\\ell_1, \\ell_2, \\ell_3$ are 3d vectors with only one dim non-zero?\n2. It's better to include $\\ell$ as a superscript in your denoting $f^*_i$ for the $\\ell$-th layer\n3. What do you store in edge feature vector?\n4. In Table 1, it seems that the numbers from your reproduction and the original paper differ by a lot. Why is that?\n5. Why GATGNN etc. do not show up in Table 3?\n6. Whether to add the angle information is always debatable in this field. ALIGNN often becomes the second best numbers and it encodes angles. But you claimed angles \"induce high complexity\", could you clarify on this?\n7. For each material property, do you train a separate model? or you use one learnt representaion for all? 1. I do believe it would be better for authors to include the discussion for the comparison between Euclidean neural nets and graph neural nets in encoding the crystals.\n2. Some interpretations on the results can help me to better understand the perforamnce gain.\n3. More evaluation metrics would make it more convincing. ",
" Periodic graph is ubiquitous in the real world, such as crystal material. Representation learning on periodic graphs is a significant task for downstream tasks such as property prediction. This paper proposes a new transformer-based framework for the representation learning on periodic graphs. Specifically, it comes up with a new strategy to construct graphs and a way to encode lattice matrix L. The experiments section shows the effectiveness of the method. Strength\n1. The paper formularizes the periodic graph in a clear way using A, P and L matrix.\n3. The experimental results show the effectiveness of the proposed method.\n\nWeakness:\n1. The motivation of the paper can be further improved:\n a) Why do we need to construct a graph from the unit-cell coordinates rather than directly using coordinates to conduct representation learning? What's the benefit to borrow the graph structure here? I believe there are a few works for representation learning on 3D positions such as [1], [2] and [3]\n b) What is the motivation to form self-connecting edges to contain the information of lattice matrix? Why not just concatenate it with the representation of a single unit cell?\n2. It seems that section 3.1 and 3.2 can be moved to section 2 or they can just serve as a single section, since it seems that they are the motivation/theoretical ground of the proposed model not the model itself.\n3. Line 45, the paragraph is titled as \"Crystal property prediction\" but it seems that the whole paragraph is about the definition of peridic lattice. The title might be modified accordingly.\n4. It seems that E(3) invariance has been mentioned even in the abstract but it is formally defined in section 3.1.\n5. Line 87, it mentions that \"the structure f a cell remains the same when ...\". Does the structure here mean the the output of the \"f\" function mentioned in the Definition 1 above?\n6. Line 90, It says \"periodic invariance is also shown necessary for generating valid crystal representation\". Why?\n7. In table 2 the paper compares the time complexity of the proposed method with ALIGNN. It would be great if some complexity analysis is provided here.\n8. Some relevant works [4, 5] regarding the periodic graphs generation can be discussed in the related works section.\n9. The code is not published for now to reproduce the results.\n\n\n[1] Xu, Minkai, et al. \"Geodiff: A geometric diffusion model for molecular conformation generation.\" arXiv preprint arXiv:2203.02923 (2022).\n[2] Kim, Seohyun, Jaeyoo Park, and Bohyung Han. \"Rotation-invariant local-to-global representation learning for 3d point cloud.\" Advances in Neural Information Processing Systems 33 (2020): 8174-8185.\n[3] Court, Callum J., et al. \"3-D inorganic crystal structure generation and property prediction via representation learning.\" Journal of chemical information and modeling 60.10 (2020): 4518-4535.\n[4] Xie, Tian, et al. \"Crystal diffusion variational autoencoder for periodic material generation.\" arXiv preprint arXiv:2110.06197 (2021).\n[5] Wang, Shiyu, Xiaojie Guo, and Liang Zhao. \"Deep Generative Model for Periodic Graphs.\" arXiv preprint arXiv:2201.11932 (2022). I would suggest the author address the questions mentioned under the \"Weakness\" in the \"Strengths And Weaknesses\" section of the review. The limitations of the model are not mentioned, the negative societal impacts of your work are marked as N/A."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"tJ4luCl2umg",
"K_-LLgDqFz",
"B0LwuUcUIFfa",
"DtJPJjPy8GZ",
"DtJPJjPy8GZ",
"covwb7ZpzUSH",
"tJ4luCl2umg",
"3qO8_-p7BPk4",
"iVrQ65jjN2k",
"iVrQ65jjN2k",
"iVrQ65jjN2k",
"g9BjlI_U2wz",
"qWTUs78AOO0",
"qWTUs78AOO0",
"qWTUs78AOO0",
"qWTUs78AOO0",
"4_dMLdjd7SO",
"4_dMLdjd7SO",
"4_dMLdjd7SO",
"4_dMLdjd7SO",
"tJ4luCl2umg",
"tJ4luCl2umg",
"84WLKF3wUbi",
"84WLKF3wUbi",
"84WLKF3wUbi",
"nips_2022_pqCT3L-BU9T",
"nips_2022_pqCT3L-BU9T",
"nips_2022_pqCT3L-BU9T",
"nips_2022_pqCT3L-BU9T"
] |
nips_2022_2EBn01PJh17 | Adaptive Cholesky Gaussian Processes | We present a method to fit exact Gaussian process models to large datasets by considering only a subset of the data. Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead. From an empirical observation that the log-marginal likelihood often exhibits a linear trend once a sufficient subset of a dataset has been observed, we conclude that many large datasets contain redundant information that only slightly affects the posterior. Based on this, we provide probabilistic bounds on the full model evidence that can identify such subsets. Remarkably, these bounds are largely composed of terms that appear in intermediate steps of the standard Cholesky decomposition, allowing us to modify the algorithm to adaptively stop the decomposition once enough data have been observed. Empirically, we show that our method can be directly plugged into well-known inference schemes to fit exact Gaussian process models to large datasets. | Reject | This paper proposes some nice ideas on speeding up Gaussian process inference based on approximating the marginal using subsamples. However, several reviewers noted gaps and potential flaws in the technical details. The reviews as well as detailed replies during the rebuttal period will help the authors prepare a stronger revision, but the work is not airtight and is not ready for publication in its current form. | val | [
"y_SE3AFLsJx",
"EcccFK9X9Ua",
"i6KDnDEaiw",
"ersnQf7N9Vo",
"XxkndtIIUzD",
"zi3vHFaops",
"FF5IFMI6KwG",
"nyVK0s2jFOg",
"l186Q1BbZ7m",
"-xrhce6ozAp",
"L11pV5T9H1v"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nThank you for your feedback on our paper. Having rerun our experiments, we have now uploaded a revised manuscript. In particular, we would like to highlight the changes to Figure 2, where we have corrected a mistake in the timing of the exact GP. The figure now clearly shows that the overhead of ACGP compared to standard Cholesky decomposition is negligible. \n\nPlease note that the exact-log-likelihood plots for the hyper-parameter tuning experiments are not yet updated, as these computations take longer. They will be updated for the camera-ready version.\n\nAgain, thank you all for your questions and comments. We hope our discussions clarified your questions and added to the soundness of our work. With all concerns addressed, we hope this will reflect in your updated score.",
" Thank you for continuing the discussion.\n\n> Your counter-example isn't one. If $y_i$ are independent then they are decorrelated, and their covariance matrix ought to be diagonal, by definition! If you denote $K$ their covariance matrix, it ought to be diagonal...\n> In the log-likelihood of a multivariate Gaussian, the matrix **IS** the covariance matrix of the Gaussian vector. You cannot decouple the nature of from the distribution of $(y_1, …, y_n)$.\n\nYou are right that (for our example) the kernel matrix *ought* to be diagonal, but model misspecification happens. The true covariance of the samples can be different from the kernel matrix. Considering again our example: during the course of hyper-parameter tuning, the matrix $K$ would certainly turn more diagonal, but not from the start. \n\nThe kernel function reflects our believe about how the true function values relate as a function of the inputs. Even though we condition implicitly on this true function, we do not know this relationship. The independence assumption on the dataset is not a contradiction. \n\nPut differently, since the kernel function is specified to capture our belief about the true underlying function, it makes sense to choose kernel functions that do not result in diagonal kernel matrices—even in the frequentist setting.\n\n\n> Once more, even if you shuffle the dataset $(y_1, …, y_n)$, the shuffled versions $y_{\\pi_i}$ are identically distributed but **NOT** independent. Hint: if $y_i$ are highly correlated, so will $y_{\\pi_i}$.\n\nOnce again, we absolutely agree with you: shuffling the data does not establish strict i.i.d.-ness. However, that this is not an issue is discussed concisely in the following half page article: https://web.ma.utexas.edu/users/parker/sampling/repl.htm\n\nSampling with replacement would establish i.i.d.-ness, and the article discusses why there is only little difference to sampling without (i.e., shuffling).\n\n\n> Randomly sampling with replacement violates much of the narrative in your paper (e.g. sequentially going through the sample, the incremental decomposition of Eq (2), etc.).\n\nIf we can now agree that the difference between sampling with or without replacement becomes negligible for large datasets, you will maybe acknowledge that it does not violate the narrative of our paper.\n\nThank you for a great discussion on the fundamental principles of the frequentist theory underlying our work. We hope the other reviewers and the AC also benefit from these clarifications.\n",
" You might be confused. \n\nYour counter-example isn't one. If $y_i$ are independent then they are decorrelated, and their covariance matrix ought to be diagonal, by definition! If you denote $K$ their covariance matrix, it ought to be diagonal... \n\nIn the log-likelihood of a multivariate Gaussian, the matrix $K$ **IS** the covariance matrix of the Gaussian vector. You cannot decouple the nature of $K$ from the distribution of $(y_1, \\dots, y_n)$.\n\nOnce more, even if you shuffle the dataset $(y_1, \\dots, y_n)$, the shuffled versions $y_{\\pi_i}$ are identically distributed but **NOT** independent. Hint: if $y_i$ are highly correlated, so will $y_{\\pi_i}$.\n\nRandomly sampling with replacement violates much of the narrative in your paper (e.g. sequentially going through the sample, the incremental decomposition of Eq (2), etc.).",
" Thank you for your follow-up questions. We genuinely think we are in agreement, but that we might be talking past each other. \n\n\n> If you work conditional on $f$ or assume $f$ is deterministic, then $(y_i)$ would be independent, $K$ would be diagonal, […]\n\nTo make an example where $K$ is not diagonal, but the $y$’s are independent:\nConsider for instance $N$ $x$’s and $y$’s sampled from a 1D zero-mean, unit-variance Gaussian. These $x$’s and $y$’s will be i.i.d., but for a choice of kernel of, say, a squared exponential with length scale 1, the kernel matrix $K$ will not be diagonal. \n\nThus, even if the $y$’s are independent, $K$ may not be diagonal. We hope we can agree on this example.\n\n\n> If you are concerned with speeding up the evaluation of the log-marginal likelihood, then you cannot possibly claim that $(x_i, y_i)$ are i.i.d. unconditionally.\n\nWe very much agree with this! As we pointed out in our response to your initial review, we condition implicitly on the true underlying function $f$.\n\nPlease note that the i.i.d. assumption does not carry over to the elements of the kernel matrix. Writing $y_n=f(x_n)+\\epsilon_n$, one can see, for example for the quadratic form, that analyzing the expectation of\n\n$$\\vec y^\\top K^{-1}\\vec y=[f(x_1)+\\epsilon_1, \\ldots, f(x_N)+\\epsilon_N] \\begin{bmatrix}k(x_1,x_1)+\\sigma^2 & k(x_1, x_2) & \\ldots\\\\\\\\ \\vdots & \\ddots \\end{bmatrix}^{-1}[f(x_1)+\\epsilon_1, \\ldots, f(x_N)+\\epsilon_N]^\\top$$\n\nis non-trivial, due to the application of the kernel function. Coming back to the earlier point: independent $x$’s can have non-zero kernel function values giving rise to non-zero off-diagonal entries.\n\n\n\n> Simply put: $(y_i, x_i)$ i.i.d => $(y_i)$ i.i.d. => $(y_i)$ independent => $(y_i)$ decorrelated => $K$ diagonal => no marginal likelihood computation problem. (=> : implies).\n\nWe agree with all the implications except for the next to last one (=> $K$ diagonal) for the reasons discussed in the counter-example above. That the dataset entries (conditioned on the true function) are i.i.d., does not enforce a diagonal kernel matrix. \n\nTo reiterate, even independent $x$’s can have non-zero kernel function values, since two independent points can (and probably will) have non-zero Euclidean distance. \n\n\n> I think you are confusing the randomness inherent to regression models with the randomness you need for this approach. What you want here is a (sample) randomness that guards against any specific/peculiar order in which samples are sequentially picked.\n\nWe are not sure that we understand what you mean by “model randomness” and “sample randomness”. If by “sample randomness” you are referring to randomness in the dataset, then this is exactly the i.i.d. assumption, the randomness that our algorithm and proofs rely on. In this sense, the i.i.d. assumption is our guard against peculiar dataset ordering. \n\nThe primary goal of our approach is to computationally exploit sample randomness (i.i.d. dataset) beyond prior and likelihood (model randomness) if the former is present. If we misunderstood your comment, please elaborate on randomnesses you are referring to.\n\n\n> Rigorously, you want to introduce a random (and uniform) permutation $(\\pi_i)$ of $(1, \\dots, N)$, and apply your algorithm to $(y_{\\pi_i}, x_{\\pi_i})$, not to $(x_i, y_i)$.\n> Note that, even then, $(y_{\\pi_i}, x_{\\pi_i})$ would be identically distributed but not quite independent! 
Counter-examples are easily constructed.\n\nPractically speaking, this is exactly what we do in our experiments.\n\nTheoretically, we agree that introducing a random permutation would be the more rigorous way of proving that our algorithm works for our experiments. However, sampling-without-replacement results are generally hard to obtain. Instead, we rely on the i.i.d. assumption, which is standard practice in the frequentist literature. However, with increasing dataset size, sampling with replacement (which satisfies the i.i.d. assumption) and sampling without (random permutations) become more and more the same. Our experiments (and many before us) validate this empirically.\n\n\n> To sum-up the i.i.d. assumption is not admissible. Even if you shuffle your $N$ samples, the identically distributed assumption would become admissible but not the independence assumption.\n\nWe hope that we have convinced you that the i.i.d. assumption (exemplified with our 1D Gaussian toy example above) does not invalidate the GP model. Assuming data to be i.i.d. is common in the frequentist world, and is how proofs are typically made. Within this framework, our theory and proofs hold. The i.i.d. setting can be established in practice (at least approximately for large datasets) by shuffling the data. This is what our algorithm relies on and why we make sure to shuffle the data in our experiments. We will make this clearer in the paper.\n\n\n\n\n",
" Thanks for your response. \n\n**Re: The i.i.d. assumption is admissible**. \n\nBeing frequentist here isn't a matter of choice. If you work conditional on $f$ or assume $f$ is deterministic, then $(y_i)$ would be independent, $K$ would be diagonal, and the marginal likelihood would be easy to evaluate. If you are concerned with speeding up the evaluation of the log-marginal likelihood, then you cannot possibly claim that $(y_i, x_i)$ are i.i.d. unconditionally.\n\nSimply put: $(y_i, x_i)$ i.i.d => $(y_i)$ i.i.d. => $(y_i)$ independent => $(y_i)$ decorrelated => $K$ diagonal => no marginal likelihood computation problem. (=> : implies).\n\nI think you are confusing the randomness inherent to regression models with the randomness you need for this approach. What you want here is a (sample) randomness that guards against any specific/peculiar order in which samples are sequentially picked. \n\nRigorously, you want to introduce a random (and uniform) permutation $(\\pi_i)$ of $(1, \\dots, N)$, and apply your algorithm to $(y_{\\pi_i}, x_{\\pi_i})$, not to $(y_i, x_i)$. Note that, even then, $(y_{\\pi_i}, x_{\\pi_i})$ would be identically distributed but not quite independent! Counter-examples are easily constructed.\n\nTo sum-up the i.i.d. assumption is not admissible. Even if you shuffle your $N$ samples, the identically distributed assumption would become admissible but not the independence assumption.\n",
" Thank you for your careful review and great feedback. We are happy that you see the potential of our work.\n\n**Weaknesses**\n\n*Point 1:*\nWe are addressing this point with our experiments in Section 4.1, for which more results can be found in Appendix B.3. We conjecture that our bounds can be improved since we did not observe any failures.\n\n*Point 2:*\nFirst, kindly note that, in total, we have investigated eight datasets, cf. Table 1 in Appendix A.\n\nAlso, thank you for acknowledging (in Limitations) that we aim to show the limitations of our work. We believe that these limitations are more important to show because synthetic settings where our method outperforms all other approaches are easy (almost trivial) to devise. You already gave one example yourself (very large scale datasets where small samples can be sufficient to approximately determine the posterior). Further examples where ACGP shines:\n1) Consider a synthetic, densely-sampled and extremely large dataset where even $O(N)$ operations are intractable. Taken to the extreme: the dataset consists of a single datapoint, copied $N$ times. In this case, our algorithm will stop after processing $M=m=10240$ datapoints.\n2) Consider an expensive kernel function, for example string or graph kernels. Again taken to the extreme: a kernel that blocks for $t$ seconds. Making $t$ large enough, our approach can beat any competitor just by saving kernel evaluations.\nWe will point out these advantages more clearly.\n\n*Point 3:*\nWe were aiming to show a more detailed picture; however, we agree that seeing the bounds on the actual log marginal likelihood is also important. We will add such plots to the revised paper, which will be uploaded in a couple of days.\n\n**Questions**\n\nThank you for pointing us to a peculiarity. It appears we inherited a bug from the CGLB optimization procedure (https://github.com/awav/CGLB/blob/main/cglb/backend/pytorch/interface.py#L327). When optimization restarts, it does so from the last assigned parameter values. If the last step was a failed/crashed linesearch, the objective function may increase. We are rerunning our experiments making sure that optimization restarts only from accepted parameter values. At this point, it appears that fixing this mistake is to our advantage since it was mostly ACGP suffering from bad restarts. In the RMSE plot, the spike has disappeared. The $-\\log p(y)$ computations are still running, and we will update all plots in the revised paper as soon as the computations are done.\n\n\n**Limitations**\n\n*Point 1:*\nJust above Section 2, we state that our stopping criterion induces an $O(M)$ overhead. For the experiments in Section 4.1 (and Appendix B.3), the overhead is, in fact, less than it appears in the manuscript: we forgot to add the time to construct the kernel matrix to the timings of the exact GP, which almost doubles the compute time for this baseline. This makes the compute time of the exact GP much more comparable to that of ACGP, showing that the actual overhead of ACGP is little. We will update all plots in the revised paper. \n\nIn Section 4.2, exact inference outperforms as well our strongest competitor: CGLB. A surprising insight from our experiments is that, although exact inference is expensive, the costs can amortize over the course of optimization. We believe this is an important message for the GP community.\n\n*Point 2:*\nFor the experiments in Section 4.1, we checked if the empirical quantities satisfied the assumption, which we can confirm. 
We will add this information below the statement of the assumption.\n\nThe assumption essentially says that the mean prediction error decreases with more data. We suspect that one can construct kernel functions for which this is not the case (hence, we need to assume), but a model that becomes worse with more data is unlikely to be an interesting model anyway. \n",
" Thank you for your review and the discussion points you raise.\n\na) Our proof sketch in Appendix D.3 may help to clarify this question. Particularly the paragraph from line 160 addresses your point: the remaining posterior predictions are correlated, which is why we look at bounds that remove this correlation.\n\nb) We investigate this question in Section 4.1, for which the complete results can be found in Appendix B.2.\n\nc) Note that $M$ is not a choice but an adaptive stopping time that depends on dataset and kernel. We describe how it is chosen in Section 3.2. Also $s$ is not a choice but defined via $M$ and the sample set size $m$ as $s:=M-m$. Indeed an open question is how large $m$ should be. It must be sufficiently large to obtain reliable bounds. With $m \\approx 10000$, our bounds hold in all experiments. Just above Section 3.5, we describe that we could obtain high-probability guarantees if we could define $s$ as a stopping time, meaning independent of $M$. However, it is unclear how one would define such a stopping time. With this submission, we address the question: when can we stop computing? The choice of $s$ is the answer to the question: when can we start to consider stopping? One purpose of this submission is to get the community interested in answering this question together.\n\nd) The dimensionality of the dataset does not enter at any point in our equations. Please see table 1 in Appendix A for sizes and dimensionalities of the datasets that we considered in the experiments and Appendix B.3 for the behavior of the bounds in these datasets.\n\ne) In general, reliably identifying one model as better than another is only possible if the approximation bounds for both models do not overlap; that is, the lower bound for one model is better than the upper bound for the other. In the probabilistic case, for example with stochastic variational inference, bounds only hold with a certain probability, and how reliable and repeatable these estimates are depends on the probabilistic parameters, for example the batch size. As pointed out before, we observed that our bounds always hold when choosing the sample set size $m$ on the order of 10000, and in this case, it should be safe to use ACGP for model comparison.",
" Thank you for your very detailed review and the time and effort you put into it. Also, thank you for pointing out a mathematical subtlety that people in the GP community might find confusing; we are applying a frequentist's perspective to approximate numerical quantities for a Bayesian---but no longer being Bayesian. In the Bayesian perspective, it indeed makes no sense to assume that inputs and targets are jointly independent. In the frequentist literature, though, this assumption is common and also sensible, because all probabilities are conditioned implicitly on the true underlying function (which may not even be part of the RKHS). Since that function is a deterministic quantity, it is dropped quietly from the conditioned variables. We will add this clarification to our introduction.\n\nHoping that we convinced you that the i.i.d. assumption is admissible, please note that your counter-example violates it. Via the i.i.d. assumption, we make transparent that our algorithm can fail when applied to datasets with a peculiar permutation or ordering (time-series being an example). We sacrifice guarding against these cases in exchange for independence of $N$---this is where the speed-up is coming from. We provide an analysis with the proof of Theorems 2 and 3. As you suggested, in our experiments, we indeed shuffle our datasets and report results corresponding to five independent permutations.\n\nYou are correct that $y_{s:N}$ is indeed a deterministic upper bound to the remaining determinant. The critical observation is that we do not bound this quantity but the expectation of that term conditioned on the first $s$ inputs. Regarding the lower bound: that there is hope has definitely been shown in the peer-reviewed CGLB paper (Artemev et al., 2021). The proofs for our claims may appear somewhat lengthy, but note that we provide a more accessible proof sketch in Appendix D.",
" This paper proposes speeding up Gaussian Process inference by estimating the log marginal likelihood with a subset of the data inexpensively chosen on the fly. \n\nThe core idea is to bound the full log marginal likelihood by two quantities $L$ and $U$ that may be estimated inexpensively with a small subset of the data grown incrementally, to stop growing the subset when the error $|U-L|$ is small enough, and to use as estimate for the log marginal likelihood $(U+L)/2$. While the core idea above is interesting, the implementation of this idea by this paper has a few fundamental limitations, both conceptual and theoretical. \n\n\n## Conceptual Limitations\n\n**The first $M$ points might not be representative of the input space or even the full dataset**\n\nThe fundamental idea underpinning the stopping strategy proposed in this paper is that, if we estimate the log marginal likelihood of $M$ data points selected sequentially, and we realize that the last $s$ points we added were somehow redundant in the estimation of the log marginal likelihood of the $M$ points, then we can conclude that the first $(M-s)$ points selected are good enough to compute the log marginal likelihood of ***any*** $n \\gg M$ points. \n\nThis is clearly not the case!\n\n**Counter-Example:** As a simple counter-example, consider the following toy 1D regression problem: $x_i = i\\frac{2\\pi}{n}, ~ i \\in [1, n]$ with $n=1000000$, and $y_i = \\sin(x_i)+\\epsilon_i$ where $\\epsilon_i$ are i.i.d. Gaussian noise terms with standard deviation $1/100$. Basically, a noisy sine function.\n\nClearly, inputs $x_i$ are sorted and, no matter the interval $[0, \\alpha]$ you consider, you do not need nearly as much as $1000000\\frac{\\alpha}{2\\pi}$ points to represent the sine function! You don't even need $100\\frac{\\alpha}{2\\pi}$ points! \n\nFor most $M$ you'll take, you'll find that points $(x_{M-s}, \\dots, x_M)$ are redundant (too many points relative to how fast the function is changing) given $(x_1, \\dots, x_{M-s})$. \n\nYet, no matter $x_M \\in [0, 2\\pi)$, knowing $(y_1, x_1), \\dots, (y_M, x_M)$ is clearly not sufficient to estimate the full log marginal likelihood for the simple reason that you have no clue what the latent function can possibly look like beyond $x_M$. \n\nIs the latent function smooth, is it piecewise smooth, does it undergo changepoints after $x_M$, etc. All these scenarios would yield drastically different full marginal likelihood and you simply cannot tell them apart simply from observing $(y_1, x_1), \\dots, (y_M, x_M)$.\n\nIn general, because points $(y_{M-s}, x_{M-s}), \\dots, (y_M, x_M)$ are redundant relative to points $(y_1, x_1), \\dots, (y_{M-s}, x_{M-s})$ does not mean that $(y_1, x_1), \\dots, (y_{M-s}, x_{M-s})$ are representative of the latent function on the **whole input domain**, which is what you need.\n\n**The main takeaway here is that the order in which you consider your inputs $x_i$ matters! At the very least, your dataset needs to be shuffled, you need to revise your algorithm to make sure it is not exposed to the peculiarities of a specific permutation, and you need to provide some analysis on the effect of this random shuffling on overall performance.**\n\n\n## Theoretical Limitations\n\nThe paper also makes a few theoretical mistakes. \n\n\n**Data Points Are Not i.i.d.; Theorem 2 is Questionable.**\n\nFirst, it is said a couple of times that data $(y_i, x_i)$ are assumed **i.i.d.** (e.g. Page 4, Line 125, and Theorem 2). 
That's incorrect and (obviously) never the case in Gaussian Process modeling. If that was the case, the Gram matrix $K$ would be diagonal and you would not have any issue estimating the marginal likelihood. \n\n$(y_i, x_i)$ are usually assumed **i.i.d. conditional on the latent function**! In other words, it is the noise terms affecting observations that are assumed to be i.i.d., not the observations.\n\nIt would appear that the main result, Theorem 2, relies on this assumption.\n\n\n**The Upper Bound $\\mathcal{U}_D$ is questionable**\n\nNote that the log determinant of a positive definite matrix is always smaller or equal to the log determinant of its diagonal. \n\nHint: Relate the log-det of a psd matrix to the entropy of a multivariate Gaussian, note that the entropy of a multivariate distribution is the sum of the entropies of its marginals and the entropy of its copula, and note that the entropy of a copula is always non-positive (the independence copula is maximum-entropy and has entropy $0$).\n\nHence, in Equation (6), the term $(N-s)\\mu_D$ should really be the sum of the posterior variances of $y_{s:N}$. No need for the incorrect i.i.d. assumption on $(y_i, x_i)$.\n\n\n**The Lower Bound $\\mathcal{L}_D$ is questionable**\n\nFundamentally, is there really any hope to lower-bound the log-det of the conditional covariance matrix? Depending on how correlated any two unobserved outputs are unconditionally, this conditional covariance matrix can have as low a determinant as possible, irrespective of any observations we have made. \n\nIf any two unobserved outputs are perfectly correlated, which depends solely on the kernel, then the log-det of the conditional covariance matrix will be $-\\infty$, no matter what outputs $y_{:s}$ were observed! Please address the concerns in bold above. Additionally, what are the fundamental differences between what you are trying to achieve and inducing point selection? Yes.",
" This paper proposed a generic marginal likelihood approximation in Gaussian process models by using a subset of the data. A logged marginal likelihood is written as a summation of recursive logged predictive densities. Although a predictive density is analytically calculated, the computation associated with a covariance matrix becomes expensive easily with a number of data points. \nthe data is divided into two folders, a processed A (size M) and a remaining B (size N-M). A logged marginal likelihood has two terms, a determinant and a quadrative term. Instead of taking a recursive sum, a logged marginal likelihood is bounded using a subset of the data (say, s-samples in A folder) and the logged marginal likelihood is approximated by the median of a lower and an upper bounds.\n\nThe paper presented the bound and proof and numerical simulation study shows that a tight bound can be achieved using some subset data, s.\n The marginal likelihood is a model evidence that it is an important estimation, particularly for model comparison. Any reliable evidence approximation will receive lots of attention. \n\nThere are some uncertainties and unclearness about the method.\n\na)\tBounds (both a determinant and a quadratic term) were formed by for by a posterior for s data in the A folder and a weighted/adjusted prediction for the remining data in A (M-s). Such argument makes sense for one prediction, p(x_{s+1}|x_{s}) and I do not get why this is a reasonable formation for a summation of recursive predictions.\nb)\tThe approximation will be sensitive to M (size of the A folder) and s (subset used for a posterior). It is clear that as s approaches N, we will get the exact evidence. I feel the paper did not investigate enough to present the approximation quality with s and M both theoretically and numerically.\nc)\tI don’t think authors did not provide any guide on how to choose s and M. In the simulation study, only M values are shown, no s values.\nd)\tCould authors comment on the behavior of bounds with s/M for different model dimensions? \ne)\tFrom the practical point of view, s is likely to be chosen with a tight bound using some threshold. When we compare models, it is crucial that we get the same comparison result using evidence approximations. Could authors comment on whether your method will be reliable in this context? \n Please see Strengths and Weakness. yes. ",
" The paper provides a stopping criterion for random data subsampling (without replacement) for Gaussian process inference by computing upper and lower bounds to the final log marginal likelihood.\nThese bounds are shown to hold on expectation, which I understand to be taken over the randomness of the subsampling order.\nImportantly, the bounds can be computed as a by-product from computing the Cholesky decomposition of the kernel matrix that would need to be computed anyway by the standard exact inference algorithm. That means that one can determine the random subset without computational overhead (at least in terms of asymptotic order of growth) and on stopping one already ends up with the Cholesky decomposition of the kernel matrix with respect to the data subset.\nIn particular, the authors use the standard decomposition of the log marginal likelihood into a quadratic and log determinant term, and develop bounds for both that can be computed efficiently from the partial Cholesky decompositions.\nIn contrast to previous methods for large scale GP inference, e.g., those that construct representative subsets of inputs or pseudo-inputs, the presented methods can be computed without seeing all the input data. This makes it appealing for very large scale datasets where small samples can be sufficient to approximately determine the posterior.\nAn experimental evaluation of the idea is provided that shows that the two individual bound pairs (log determinant and quadratic) can perform rather differently compared to a related approach that uses conjugate gradients. In particular the bound on the log determinant seems to be more efficient.\n Strengths:\n* The idea is to my knowledge novel and appealing because of its principal independence of the overall sample size and its mathematical aesthetics as a function of the partial Cholesky decompositions. One could imagine these bounds to be useful in various contexts.\n* Also the paper is very well written and presents technically challenging ideas in an accessible manner in a small amount of space.\n \nWeaknesses:\n* The bounds are only shown to hold on expectation and, while the derivation of a full probabilistic guarantee might be out-of-scope for this work, there seem to also be no experiments that test in a representative range of settings the probability with which the bounds hold in practice.\n* Generally, the tested settings seem rather limited with only one dataset considered in the main text and two more in the appendix. Given that the theory is still in early stages a little more empirical testing would be useful.\nIn particular, since the idea seems so appealing, one would expect to see some synthetic settings that demonstrate that the proposed method can outperform existing approximate and exact inference methods by a wide margin. However, such settings are not developed.\n* Also for the datasets that are considered (at least in the main text), per dataset we only see the performance of the bounds of one component of the log marginal likelihood. While the analysis on this level of detail is certainly interesting, the overall performance would depend on the interplay of both bounds per dataset.\n What is the reason for the peak in the log marginal likelihood in the second hyperparameter optimisation experiment? I might not understand the experimental setup here. Should these curves not be monotonically decreasing?\n The authors strive to honestly discuss the limitations of the work. 
Two things that should probably be considered in more detail:\n1. While the computation of the stopping criterion does not affect the asymptotic order of growth of the computational complexity there seems to be some notable overhead that leads the method to be computationally outperformed by exact inference in some cases. This overhead should be discussed in more detail.\n2. Assumption 1 is merely claimed to “appear empirically true”. What is the evidence for that? More discussion or experiments would be useful.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2022_2EBn01PJh17",
"i6KDnDEaiw",
"ersnQf7N9Vo",
"XxkndtIIUzD",
"nyVK0s2jFOg",
"L11pV5T9H1v",
"-xrhce6ozAp",
"l186Q1BbZ7m",
"nips_2022_2EBn01PJh17",
"nips_2022_2EBn01PJh17",
"nips_2022_2EBn01PJh17"
] |
nips_2022_xijYyYFlRIf | GAUDI: A Neural Architect for Immersive 3D Scene Generation | We introduce GAUDI, a generative model capable of capturing the distribution of complex and realistic 3D scenes that can be rendered immersively from a moving camera. We tackle this challenging problem with a scalable yet powerful approach, where we first optimize a latent representation that disentangles radiance fields and camera poses. This latent representation is then used to learn a generative model that enables both unconditional and conditional generation of 3D scenes. Our model generalizes previous works that focus on single objects by removing the assumption that the camera pose distribution can be shared across samples. We show that GAUDI obtains state-of-the-art performance in the unconditional generative setting across multiple datasets and allows for conditional generation of 3D scenes given conditioning variables like sparse image observations or text that describes the scene. | Accept | This paper proposes a framework to learn disentangled latent representation of radiance field and camera pose from trajectories of 3D scenes. The denoising diffusion probabilistic model can be further trained on the extracted latent representation for a conditioned or unconditioned generation. Experiments are conducted to validate the performance of the proposed method. The paper receives a total of 4 reviews. All reviewers lean to (borderline/weakly) accept the paper because of the novelty of the tasks, even though most of them raised concern about the view-consistency problem. AC recommends accepting the paper because the task of generating egocentric video, with an option of text-prompt input, is a novel and interesting direction, and this paper's merit outweighs its flaws. AC urges the authors to improve their paper by taking into account all the feedback from the reviewers. | train | [
"manQuoaDDLq",
"H03vtaX5VMx",
"OOQXCuWAhjO",
"zILC_x1xKkd",
"WclHdFc2RN",
"aaKIyoe62G",
"B2sqmXoQy2Z",
"WFK36BoFvAJ",
"EOYsFy9IDb",
"xQ3yNd-IEg",
"S4ZmkyGPpVC",
"hoYDVrFy0Vm",
"K3qL-Gj05QT",
"jSHGkHSBoxs",
"Hq8i3-o_gtw",
"wS5C2Ee7Q13"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for engaging in discussions with us and for their comments. We address them in the following:\n\n* “I am a little confused by the response the training set of trajectories are scene agnostic. If my understanding is correct, during training, each scene has a different set of camera trajectories specifically taylored to the scene, and these trajectories are modeled jointly with the scene aka scene-dependent. “\n * We apologize for this misunderstanding. We meant that even though the trajectories depend on the layout of each 3D scene, when we train the unconditional model the trajectories cannot be mapped to a specific scene after being collected (eg. there’s no additional one-hot label that maps each trajectory to a specific 3D scene). Furthermore, our training sets for all datasets contain multiple trajectories for each 3D scene. \n\n\n* “Finally, I agree that there is a potential that some imperfections of the proposed method, such as view-inconsistency, can be improved by the techniques from orthogonal works. However, since the proposed method is advertised as a 3D generative model, view-consistency is is a very important component. This is a concern shared by other reviewers as well (gUke; also related to TNJD's suggestion on using video metrics for evaluation)”\n * We thank the reviewer for this comment. We believe that finding suitable metrics for measuring consistency is going to be key for future work on scene generative models. We want to point out two observations:\n * We could replace the convolutional upsampling network with a vanilla radiance field model that preserves multi-view consistency by design. This would be a drop-in replacement in the GAUDI framework, which will guarantee scene consistency at the cost of much higher training and inference runtimes. We opted for a convolutional upsampling network to reduce the computation cost. In follow up work on GAUDI we will focus on the trade-offs between training/inference costs and scene consistency.\n * We have computed initial FVD scores for VLN-CE and will continue to think about suitable consistency metrics for the final version of the appendix. We report the FVD score on 592 trajectories in VLN-CE which is 143.53. To put this number into context, DriveGAN[*1] which is a recent model for video prediction obtains an FVD score of 360.00 on Gibson (a dataset of indoor scenes that is very similar to VLN-CE). Finally, computing the FVD score from the GT training trajectories to themselves results in a score of 43.09, this number serves as a lower bound in terms of FVD score that GAUDI could obtain. The FVD score of 143.53 indicates that GAUDi is able to capture multi-view/temporal consistency as shown in the qualitative video samples provided in the appendix.\n * [*1] Kim, Seung Wook, et al. \"Drivegan: Towards a controllable high-quality neural simulation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.",
" We would like to thank the reviewer for engaging in discussions with us and for their comments. We address them in the following:\n\n* “But I would like to stress that multiview consistency(or temporal consistency) is a major part of the problem, which is not being addressed in this paper. It is an important metric for evaluating the generation quality and should be somehow addressed in a comprehensive solution.”\n * We thank the reviewer for the comment. We just want to point out that our qualitative results show consistent scenes. We do believe that finding a suitable metric to measure this consistency is important future work. As a first step we provide FVD results for VLN-CE and will continue to work on this either for the final version of the appendix or future work.\n * We report the FVD score on 592 trajectories in VLN-CE which is 143.53. To put this number into context, DriveGAN[*1] a recent model for video prediction obtains an FVD score of 360.00 on Gibson (a dataset of indoor scenes that is very similar to VLN-CE). Finally, computing the FVD score from the GT training trajectories to themselves results in a score of 43.09, this number serves as a lower bound in terms of FVD score that a model could obtain. The FVD score of 143.53 indicates that GAUDi is able to capture multi-view/temporal consistency as shown in the qualitative video samples provided in the appendix.\n * [*1] Kim, Seung Wook, et al. \"Drivegan: Towards a controllable high-quality neural simulation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n",
" \nWe would like to thank the reviewer for engaging in discussions with us and for their comments. We address them in the following: \n\n* “I think showing interploation and generative results can to some extent demonstrate the model's capability, but that may not be enough.”\n\n * We thank the reviewer for their comment and meant to just point out that the only way an unconditional generative model (eg. a model that has an objective of modeling p(scenes) ) can generalize to novel scenes is if it learns a good interpolation of the training samples (ie. encoding the appropriate factors of variation). In this direction, we want to highlight our results of generalization beyond the training set via interpolation in Fig 5 and appendix (Fig 11 and video). A deeper analysis like the ones performed for 2D image generative models would likely require a significantly larger training set to fill out the 3D space of variations of indoor scenes. We agree that this is a good direction for follow up works to GAUDI. In addition, we predict that the generalization/interpolation results obtained by GAUDI will improve as larger public 3D scene datasets become available. We believe this to be the case given two observations: (i) the empirical results observed on 2D models (e.g., DALL-E, Imagen, etc.) trained on large datasets, and (ii) our encouraging interpolations results.\n* “The second concern is about the trajectory-level metrcs. As the authors currently generate a trajectory instead of an single image, I think they should also show some quantitative results upon the quality of the whole sequences”\n * We agree with the reviewer that this point is indeed interesting. In the last few days we have been thinking about possible ways of quantitatively evaluating the consistency in the trajectory, which is clearly visible in our qualitative results as pointed by the reviewer. Initially, our observations were the following: (i) None of the previous approaches (GRAF, pi-GAN, GSN) predict camera pose trajectories, which makes them non comparable with GAUDI in terms of FVD. (ii) Randomly cropping videos could introduce biases in FVD computation that might be encoded in future work that uses them as baseline.\n * Following the reviewers recommendation we report the FVD score on 592 trajectories in VLN-CE which is 143.53. To put this number into context, DriveGAN[*4] a recent model for video prediction obtains an FVD score of 360.00 on Gibson (a dataset of indoor scenes that is very similar to VLN-CE). Computing the FVD score from the GT training trajectories to themselves results in a score of 43.09, this number serves as a lower bound in terms of FVD score that a model could obtain.\n\n\n\n * In order to calculate the FVD score, we obtain clips of 20 equidistant frames from 592 trajectories predicted by our model as well as 592 randomly sampled ground truth videos. We then use the open repo: https://github.com/google-research/google-research/tree/master/frechet_video_distance to calculate the FVD score. Based on recent papers [*1,*2,*3,*4], the FVD score of 143.53 indicates that GAUDi is able to capture multi-view/temporal consistency as shown in the qualitative video samples provided in the appendix. We will keep thinking about different ways of qualitatively evaluating consistency and update the final appendix if we find a more suitable approach.\n * [*1] Unterthiner, Thomas, et al. 
\"Towards accurate generative models of video: A new metric & challenges.\" arXiv preprint arXiv:1812.01717 (2018).\n * [*2] Castrejon, Lluis, Nicolas Ballas, and Aaron Courville. \"Improved conditional vrnns for video prediction.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.\n * [*3] Menapace, Willi, et al. \"Playable video generation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n * [*4] Kim, Seung Wook, et al. \"Drivegan: Towards a controllable high-quality neural simulation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n",
" After reading all the comments and responses, I'm inclined to maintain my original rating as borderline accept.\nI do believe, that this paper's merit outweights its flaws: The task of generating fly-through videos, with an option of text-prompt input, is a novel and interesting direction. The results shown in the paper are not perfect, but I do believe they deserve exposure.\n\nBut I would like to stress that multiview consistency(or temporal consistency) is a major part of the problem, which is not being addressed in this paper. It is an important metric for evaluating the generation quality and should be somehow addressed in a comprehensive solution.\n\nAlso, as the method's generation quality is directly correlated with camera pose and depth qualities, it might be hard to scale up to more natural datasets. Quality camera pose and depth maps are expensive to capture and are unlikly to scale up in the upcoming future. A more direct assessment on this issue would be more informative to the reader.\n\nThat being said, I do think those drawbacks are acceptable comparing to the merit of this paper. Hence I would maintain my boarderline accept rating.",
" Dear authors,\n\nFirst, I would like to thank you for clarifying that the pi-GAN and GRAF baselines are both trained with ground truth depth. I am also glad to see that you have included this information in the revision. This is helpful as the settings used are not the standard settings in the original works.\n\nI am a little confused by the response *the training set of trajectories are scene agnostic*. If my understanding is correct, during training, each scene has a different set of camera trajectories specifically taylored to the scene, and these trajectories are modeled jointly with the scene aka *scene-dependent*. This is also mentioned in Supp. L600.\n\nAs for the poor performance of random pose synthesis (Supp. Table 5), the author's theory of non-valid camera poses is partially convincing. However, it will be even more helpful if some visual evidences can be provided.\n\nI agree with the author that $\\beta$ can potentially regularize the latent space. Nevertheless, its effect on the interpolation performance is never ablated.\n\nFinally, I agree that there is a potential that some imperfections of the proposed method, such as view-inconsistency, can be improved by the techniques from orthogonal works. However, since the proposed method is advertised as a 3D generative model, view-consistency is is a very important component. This is a concern shared by other reviewers as well (gUke; also related to TNJD's suggestion on using video metrics for evaluation).\n\nA suggestion: consider highlighting the updates in the revision to make the differences easier to spot.\n\nBest,\nKBe3",
" I would like to thank the authors for their responses and extra efforts in the rebuttal phase. It solves some of my concerns, but not all.\n\nFirst, I'm afraid I can not agree with the authors that because their model is a generative model, it does not need to be justified on reconstructing novel scenes that follows the same distribution as the training data but are not in the training set. Just the opposite, for a generative model that successfully models an underlying distribution of different scenes, it should be able to reconstruct a new scene that follows the same distribution but hasn't been seen before. An extreme failure case might be that the model just memorizes all the training samples. In fact, generative models such as VAE[1] or EBM[2] usually report the metrics (bpd, NLL, mse) on a leave-out testing dataset instead of the training set. I think showing interploation and generative results can to some extent demonstrate the model's capability, but that may not be enough.\n\nThe second concern is about the trajectory-level metrcs. As the authors currently generate a trajectory instead of an single image, I think they should also show some quantitative results upon the quality of the whole sequences. This can demonstrate the consistency among the whole sequence and the performance of pose decoder (e.g. whether the pose decoder provides valid movements, and whether the movements form a continous trajectory or contain sudden pose jump). Although the authors show some successful qualitative results, quantitative results are more convincing to demonstrate whether these problems exist (it does not must be FVD, but I think some trajectory level judgements might be needed). Also, I see the authors say that they can not compute FVD because the sequences' lengths are different. But I'm curious why they can not just generate (or crop after generation) a fixed length of sequences and calculate the score. \n\n\n[1] Vahdat, Arash, and Jan Kautz. \"NVAE: A deep hierarchical variational autoencoder.\" Advances in Neural Information Processing Systems 33 (2020): 19667-19679.\n[2] Pang, Bo, et al. \"Learning latent space energy-based prior model.\" Advances in Neural Information Processing Systems 33 (2020): 21994-22008.",
" We thank reviewer KEe3 for acknowledging receiving our rebuttal addressing reviewers comments and feedback. We are happy to discuss any additional points if needed.",
" We thank the reviewers for all their thoughtful comments and insights, which have helped clarifying and strengthening the submission. Reviewers have highlighted the following strengths:\n\n* si6w: “The authors tackle the problem of 3D complex scene generation. For learning priors from a large amount of indoor scene observation trajectories, the authors propose to first learn a disentangled representation and then learn a generative model over the latent representations. The proposed method is technically sound.”\n\n* KBe3: “The paper shows that the encoder-free (auto-decoder) training objective used in DeepSDF [29] is also applicable to neural radiance fields. This significantly simplifies the training of a generalizable neural radiance field.”\n* gUke: “The authors proposed a DDPM-based prior sampler together with trajectory and scene generation, which is a hard problem to tackel. The use of DDPM is well justified, as simple priors are limited in modeling different variations of camera poses and scene contents.”\n* KBe3: “Language-driven neural scene synthesis is a challenging problem which can lead to interesting applications.”\n* gUke: “The paper is well written and easy to follow.”\n\nIn addition, we have included explanations and experiments in our revised version of the paper and appendix to tackle reviewers questions and comments. The summary of the changes is as follows: \n\n* Reviewer: KBe3 L33-39 (clarification): explanation of dependence between scenes and valid camera poses.\n* Reviewer: gUke L126-128 and Eq. 1 (clarification): dropped orientation conditioning for radiance field.\n* Reviewer: KBe3 L139-145 (clarification): clarifying the meaning of $\\beta$.\n* Reviewer: KBe3 L222 (clarification): all baselines use GT depth during training.\n* Reviewer: si6w L237-246 (clarification): details on conditional inference problems. \n* Reviewer: TNJD L480-486 (limitation): how to model infinitely big or “boundless” scenes.\n* Reviewer: gUke L577-580 (clarification): inference time.\n* Reviewer: KBe3/TNJD Appendix F (addition): experiments to test GAUDI for arbitrary viewpoint synthesis.\n",
" * “The comparison with previous methods, shown in Tab. 2 is not a fair comparison”\n * All methods in Tab. 2 were trained using GT depth, following the protocol established in GSN [6]. This means that GT depth is concatenated to rgb along the channel dimension and fed to the discriminator (we explain this in L222 of the revised submission). In addition, in GAUDI we don’t assume knowledge about what images come from the same scene, the training set of trajectories are scene agnostic. Finally, we want to highlight that the camera poses for ARKit [1] dataset are estimated via an off-the-shelf SfM approach and GAUDI still outperforms previous approaches in this setting as show in Tab 2.\n* “The proposed method puts scene-dependent constrains on the camera pose, limiting its use in arbitrary viewpoint synthesis”\n * We thank the reviewer for bringing up this point, which we have clarified in the revision L33-39. Notably, scene-dependent constraints on the camera pose are needed to model unconstrained scenes with different layouts, like the ones we show in Fig. 3(b). Note how each scene defines different areas of navigable space (different dark dashed areas) where cameras can be placed. This is in opposition to generative models trained on datasets of single objects, like Shapenet, where the distribution of camera poses can be defined independently of the object, (eg. all cameras on the sphere are “valid cameras” independently of the object in Shapenet). \n * In addition, we designed an experiment to show the performance in the arbitrary viewpoint synthesis setting. In this experiment we take a model trained on VLN-CE [21] and perturbed the camera poses sampled from the prior with uniform noise both in translation (up to 50 cm) and orientation (up to 20 degrees). We observe that while that there’s an increase in FID metrics as we add noise, GAUDI still generates realistic images, outperforming previous approaches by a wide margin. This new experiment is included in the section F of the revised appendix.\n * | | No Noise | 25 cm + 10 deg | 50 cm + 20 deg |\n | -------------- | -------------- | -------------- | -------------- |\n |FID | 18.52 | 20.38 | 25.9 |\n |SwaV-FID | 3.63 | 4.01 | 4.68 |\n * In Tab. 5 \"GAUDI w. Random Pose\", we show the result of completely breaking the dependence between scenes and valid camera poses, which results in a steep increase in FID as expected since views from non-valid camera poses are often rendered (eg. camera poses that are placed outside of the navigable area of a given scene). \n* “The benefit of adding perturbation β to the latent variables (L138) is not justified by the experiment.”\n * We thank the reviewer for bringing up this point, which we have incorporated in L139-145 in the revised version. \\beta can be interpreted as a weight that controls the smoothness of the distribution of latents. With \\beta>0 we enforce a smoothing of the latent distribution enabling interpolation (as shown in Fig. 5 and appendix) at the cost of sacrificing fidelity, similar to comparing the latent space and reconstructions of AEs vs. VAEs for images.\n* “There appears to be strong view inconsistencies in the interpolation video”\n * Interpolating radiance fields is a complex task since both geometry and appearance of the scene have to change jointly and consistently. In addition, as opposed to traditional stimuli like faces, scenes do not have a canonical orientation, which makes interpolation all the more challenging. 
To the best of our knowledge, GAUDI is the first approach to show reasonable interpolation of scene-level radiance fields. We expect that as the amount of training data for 3D scenes increases, these artifacts will be mitigated. Future work can consider tricks to improve multi-view consistency [*1].\n* “Memorization vs novel generation in GANs vs latent DDPMs”\n * This is a fundamental question of generative modeling (for any data domain): the capacity of generative models to generate novel samples rather than memorize is an open problem. From a training objective perspective, a model that can perfectly memorize the training data distribution is a perfect generative model. During inference we rely on the functional inductive bias of the model to generate novel samples via non-linear interpolation of the training data manifold. Approaches like [*2, *3] show that training DDPMs in latent space for 2D images outperforms GANs. We observed the same in GAUDI: our interpolated scenes are meaningful and the model outperforms previous GAN-based approaches. \n\n[*1] Gu, Jiatao, et al. \"Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis.\" arXiv:2110.08985\n\n[*2] Rombach, Robin, et al. \"High-resolution image synthesis with latent diffusion models.\" CVPR 2022.\n\n[*3] Vahdat, et al. \"Score-based generative modeling in latent space.\" NeurIPS 2021.",
" * “As mentioned above, how robust is the method to camera pose registration errors? it seems the ARkit results is worse than the results from Replica.“\n * While our model still outperforms all the previous approaches on ARKit we agree with the reviewer that ARKit is the most challenging dataset due to a few factors: it contains real high-resolution scenes and it doesn’t provide ground truth depth or camera poses. In future work, we could backprop gradients from image reconstruction into the camera pose decoder network to fine-tune results [*1]. This can help mitigate some of the issues in ARKit. \n* “How generalizable it is to use a DDPM to sample the latents? Would it still apply to other generative tasks where simple priors fail?”\n * Learning DDPMs in latent space is a very flexible approach for generative modeling [*3] specially for tasks where simple prior fails or are hard to encode like in radiance fields. \n* “How robust is the method to succesfully train?”\n * GAUDI is extremely simple to train, this is due to the fact that our model boils down to a series of reconstruction objectives. First, we encode radiance fields and camera poses into latents by minimizing a reconstruction loss and then we train a DDPM model on this latent space, which again is series of denoising auto-encoders. Notably, when training the prior we didn't do grid search of hyper-parameters other than picking a fixed learning rate and a model size that can fit into a single Nvidia A100 GPU. GAUDI is much more robust than previous approach that adopt an adversarial formulation.\n* “It seems that the generated results lack temporal coherency. Though that's quite common for NeRF based generators, is it possible to visualize the generated geometry/depth maps as well? It seems the NeRF model is view-dependent, would a view independent model be better at consistency?”\n * We thank the reviewer for this question. We believe the reviewer is referring to multi-view consistency as opposed to temporal consistency (since the scene is static and only the camera moves around). The view inconsistency artifacts in the results experiment could be a result of performing the volumetric rendering in feature space and then use a convnet to upsample the feature to a desired resolution. We adopt this commonly used trick from previous work [28, 6] to speed up training on large-scale datasets. In future work, we can consider tricks to improve the multi-view consistency as in StyleNerf [*2] to improve this kind of artifacts. Due to the fact that volumetric rendering happens in low-res feature space as in [28, 6], the resolution of the predicted depth is too small to be meaningful. Finally, by default we do remove the view dependent conditioning of NeRF in this paper. This is clarified in the revised manuscript in L126-127 and in Eq. (1).\n* Finally, we would like to kindly remind the reviewer that discussions on limitations and societal impact are provided in the supplementary material (as allowed in the submission policy). \n\n[*1] Yen-Chen, Lin, et al. \"inerf: Inverting neural radiance fields for pose estimation.\" (IROS). IEEE, 2021.\n\n[*2] Gu, Jiatao, et al. \"Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis.\" arXiv preprint arXiv:2110.08985 (2021).\n\n[*3] Rombach, Robin, et al. \"High-resolution image synthesis with latent diffusion models.\" Proceedings of the IEEE/CVF \n",
" * “The technical part is not very easy to follow, especially for the conditional generation part. It would be better if more details can be given for better clarity.”\n * We thank the reviewer for this comment. We have clarified the conditional inference section in L237-246 of the revised paper and referenced detailed sections in the appendix. \n\n* “Since the results are demonstrated via short rendered videos under sampled camera path, it’s not that friendly to observe the diversity of the generated scenes. For example, the provided results for Replica in the supplementary video seem very similar. “\n * In the supplementary material we provided videos with 64 samples generated from the prior for each dataset (Vizdoom, Replica, VLN-CE and ARKit). Indeed, since Replica only contains 18 scenes there are multiple trajectories that are sampled on the same underlying scene. Note that this is not the case for bigger datasets like VLN-CE or ARKit, for which videos are also provided in the supplementary. \n* “Is the proposed model able to generate longer camera paths (e.g, camera paths that are longer than the provided ones?)”\n * We thank the reviewer for this comment. Since the camera pose decoder is queried with a “temporal embedding” $s$ that is continuous in the [-1, 1] interval, our model can perform super-resolution of the camera paths, generating paths that are much longer than the ones seen during training. In addition, the training procedure for encoding trajectories is not limited to a specific length and can deal with arbitrary length trajectories.\n* “Can the proposed model guarantee the continuity and validness of the sampled camera poses where a new latent representation is sampled from the Gaussian distribution? If so, how does the proposed model achieve this? Is it achieved by the noise perturbations in latent optimization and the camera pose normalization?“\n * The generative model learns the joint distribution of $z_scene$ and $z_pose$ latents . This means that every time we sample from the prior we get both a $z_scene$ and a $z_pose$ that we can decode into a continuous camera trajectory (via the combination of temporal embedding $s$ and $z_pose$) and use it to render RGB frames from the radiance field. The validity and continuity of the camera camera poses is learned from the training data. \n* “How long does it exactly take to generate a new scene?“\n * We thank the reviewer for this comment, which have added to the revision of our manuscript L577-580. Inference of the radiance field is amortize across the whole scene. This means that we only need to sample from the prior once and then we can render as many views of the scene as needed. Sampling from the prior takes 1.52s. Once we obtain the scene embedding, per frame rendering takes 1.6s at 128x128 resolution. These results are obtained on a single A100 NVIDIA GPU. \n* “For text-based conditional generation, there are many artifacts observed within the provided results, what is the possible reason for this phenomenon?“\n * GAUDI is the first model to tackle text-based generation of unconstrained 3D scenes and as a result is not as fine-tuned as recent generative models for images like DALLE-2 or Imagen. Additionally, the size of the available 3d scene datasets is small relative to the data used to train DALLE-2 or Imagen. We expect this artifacts to be mitigated as the amount of training data increases. This is for two reasons: (i) a lot more 3D scene variety that can fill in the interpolation manifold. 
(ii) a larger amount of paired text-scene data will induce better mappings from text to scene. ",
" * “Novel view synthesis performance: In 4.2 the authors show the reconstruction results... But how well does the model do for novel view synthesis?“\n\n * We thank the reviewer for bringing up this issue. We run an experiment to provide empirical evidence to support the fact the model doesn’t overfit to the camera poses seeing during training and that it allows for novel viewpoint synthesis. In this experiment tackle a model trained on VLN-CE [21] we perturb predicted camera poses with uniform noise both in translation (up to 50 cm) and orientation (up to 20 deg). We observe that while that there’s a small increase in FID metrics as we add noise, GAUDI still generates realistic images, outperforming previous approaches. We have included these analysis in section F of the appendix.\n | | No Noise | 25 cm + 10 deg | 50 cm + 20 deg |\n | -------------- | -------------- | -------------- | -------------- |\n |FID | 18.52 | 20.38 | 25.9 |\n |SwaV-FID | 3.63 | 4.01 | 4.68 |\n * Additionally, in Sect. 4.5.2 we evaluate the performance of GAUDI for conditional inference of a scene given an image. This setting is the closest to novel view synthesis in a probabilistic formulation. We note that at its core view synthesis is not a deterministic problem but rather a stochastic one (eg. given an image, multiple completions of the scene are possible). We show that given an image, GAUDI can generate multiple trajectories that are consistent with the same underlying 3D scene. We encourage the reviewer to check the conditional inference results video in the appendix under “./cond_samples/cond_gen_raw_slides.mp4”.\n* “Test on novel objects: Similar to 1, how well can the model do to reconstruct a novel scene that has not be seen during training?”\n * We thank the reviwer for this comment. GAUDI is a generative model, and as opposed to a 3d reconstruction method, the goal of GAUDI is to approximate the empirical distribution of scenes in the training set. In order to generate novel samples, generative models learn high complex non-linear interpolations of samples in the training set. In our interpolation results in Fig. 5 and in the appendix we show that GAUDI is indeed able to generate novel scenes.\n* “For unconditional generation, the authors mainly demonstrate their model's ability through FID score, which is based on the quality of a single image. However, the model proposed here actually generates a trajectory in the scene. Would it be possible to compare the sample quality on the trajectory level (which, besides the quality of each frame, also consider the consistency among each frames)? A reference metrics here might be the FVD [1] score.”\n * We thank the reviewer for pointing out using FVD as an additional metric to compute sample quality as a trajectory level. One consideration is that our trajectories are generally of different length (eg. different number of frames), which breaks the requirement for FVD and makes it not directly applicable. We agree with the reviewer that trajectory level metrics can be an interesting metric to study.\n* “For conditional generation, why there are no baselines?”\n * For conditional generation of 3D scenes there are no comparable baselines because GAUDI, as far as we know, is the first model to tackle this problem. \n* “What does the denoising recontrution objective mean in L133?\"\n * We agree with the reviewer that the terminology used in L133 \"denoising recontrution objective\" can cause confusion with the objective in DDPMs. 
The \"denoising reconstruction objective\" comes from the fact that while optimizing latents we add noise to the latents as explained in L137, this is completely separate from the next stage where we learn the generative model. We have gotten rid of the \"denoising\" term in L134 in the revised version of the manuscript.\n* “The current model uses a single tri-plane representation for the whole scene, which means that the movements of the agent are limited to a predefined area. Then what if the agent moves outside the boundary”\n * We thank the reviewer for pointing this out, which we will include in our discussion of limitations. Indeed, the tri-plane representation defines the volume of physical space on which the radiance field is defined. Note that the cost of increasing the resolution of this representation scales quadratically with the volume of space, which already makes it a good candidate to model big indoor space with multiple rooms. To tackle the case in which one wants to model a infinitely big scenes, one could make both the camera pose and the radiance field representation $\\mathbf{W}$ be a function of time step embedding $s$. This will allow for the radiance field to change as the camera moves. We have included this discussion in the revised version of the appendix L480-486.\n",
" The paper proposed a method for the generative modeling of indoor scenes. The model contains a tri-plane based radiance field branch, and a camera trajectory branch. They are trained with indoor scene datasets that include RGBD images and ground truth camera trajectories. \n\nThe model employs an encoder-free architecture similar to DeepSDF [29]. Specifically, each individual scene in the dataset is associated with a learnable latent variable, which is optimized together with the model parameters during training. \n\nThe model is trained with RGB reconstruction loss, depth loss and loss between ground truth and predicted camera trajectories. After the auto-decoder model is trained, a diffusion model can be trained on top of the learned latents to enable sampling. The latent generative model can optionally be conditioned on text embeddings to achieve language-driven generation, provided that the dataset also contains language navigation. ### Strengths\n* Language-driven neural scene synthesis is a challenging problem which can lead to interesting applications.\n* The paper shows that the encoder-free (auto-decoder) training objective used in DeepSDF [29] is also applicable to neural radiance fields. This significantly simplifies the training of a generalizable neural radiance field.\n* The paper demonstrates that when the access to camera poses and depth maps is available, a 3D generative model producing higher quality images can be obtained with the use of stronger supervision signals.\n\n### Weaknesses\n* The comparison with previous methods, shown in Table 2 is not a fair comparison. The proposed model is trained with extra information from the dataset, such as ground truth camera poses, depths, and which images belong to the same scenes, while the baseline methods are trained with unposed images only. This important difference is not addressed in the paper. Instead, the paper choose to attribute the improvement to the learning of better latents (L224-228).\n* Compared to previous methods, the proposed method puts scene-dependent constrains on the camera pose, limiting its use in arbitrary viewpoint synthesis. In fact, it appears that the model performed poorly on novel view/trajectories, as evident by Table 5 \"GAUDI w. Random Pose\" in the supplement material.\n* The benefit of adding perturbation $\\beta$ to the latent variables (L138) is not justified by the experiment. Supp. Table 4 shows that the reconstruction quality strictly decreases with larger $\\beta$. What's worse, Supp. Table 5 also suggests that large $\\beta$ affects generation performance while setting $\\beta = 0$ leads to one of the best performing models.\n * There appears to be strong view inconsistencies in the interpolation video (in the Supp.) as the camera moves within the same scene (geometry or texture changes with camera pose). I wonder why is this happening?\n* I wonder how does the proposed method (auto-decoder + latent DDPM) compare with GAN based methods in terms of synthesizing novel scenes? Does it have the tendency of memorizing the existing training scenes instead of generating novel ones? Covered in the supplemental material.",
" This paper proposes a generative model for both indoor scenes and camera pose trajectories. The method can be trained as an uncondintional sampler or as a onditional one given a reference image/ a text prompt. At the core of the method, is a Denoising diffusion model based latent sampler, together with a scene generator, a trajactory generator, and a NeRF-based rendering pipeline. Compared with previous method, the proposed method scales well for indoor scenes, and generates reasonable results. Originality: \nThe authors proposed a DDPM-based prior sampler together with trajectory and scene generation, which is a hard problem to tackel. The use of DDPM is well justified, as simple priors are limited in modeling different variations of camera poses and scene contents.\n\nQuality:\nThe image generation quality is limited, but is understandable since the task is hard. The authors provided enough results to faithfully represent the quality of the method, which is good. \n\nSignificance:\nI think this paper is of singinifcance especially to people interested in 3D aware video generation. But it should be noted that the method requires GT camera poses to train, and it seems the ARkit results are worse than Replica, potentially because the inaccuracies in camera calibration.\n\nClarity:\nThe paper is well written and easy to follow.\n 1. As mentioned above, how robust is the method to camera pose registration errors? it seems the ARkit results is worse than the results from Replica.\n\n2. How generalizable it is to use a DDPM to sample the latents? Would it still apply to other generative tasks where simple priors fail?\n\n3. How robust is the method to succesfully train? \n\n4. It seems that the generated results lack temporal coherency. Though that's quite common for NeRF based generators, is it possible to visualize the generated geometry/depth maps as well? It seems the NeRF model is view-dependent, would a view independent model be better at consistency?\n\n The authors do not provide limitation discussions; it would be really helpful to have them.",
" This paper proposes a framework for 3D scene generation. The proposed framework enables the modeling of the scene distributions and scene-dependent camera distributions by learning priors from disentangled latent representations for radiance field and camera poses, which are obtained via optimizing a reconstruction objective over the training trajectories. Experimental results show that the proposed framework can be used for both unconditional and conditional generations. Extensive results are provided for ablation study to study the influence of key parameters and modules in the framework. The authors tackle the problem of 3D complex scene generation. For learning priors from a large amount of indoor scene observation trajectories, the authors propose to first learn a disentangled representation and then learn a generative model over the latent representations. The proposed method is technically sound. The writing and organization of this paper are good, and the authors provide sufficient experimental results in the manuscript and supplementary materials.\nThe technical part is not very easy to follow, especially for the conditional generation part. It would be better if more details can be given for better clarity.\n\nOverall, the problem setting and the technical route of this work make sense to me. But there are still converns over the work in its current form. See the issue of the technical part above and more in the following. 1)\tSince the results are demonstrated via short rendered videos under sampled camera path, it’s not that friendly to observe the diversity of the generated scenes. For example, the provided results for Replica in the supplementary video seem very similar. Are there any chances that authors can provide some visualizations of the underlying meshes or global layouts of the generated scenes?\n\n2)\tIs the proposed model able to generate longer camera paths (e.g, camera paths that are longer than the provided ones?)\n\n3)\tCan the proposed model guarantee the continuity and validness of the sampled camera poses where a new latent representation is sampled from the Gaussian distribution? If so, how does the proposed model achieve this? Is it achieved by the noise perturbations in latent optimization and the camera pose normalization?\n\n4)\tHow long does it exactly take to generate a new scene?\n\n5)\tFor text-based conditional generation, there are many artifacts observed within the provided results, what is the possible reason for this phenomenon? \n The authors clearly state the limitations and potential negative impact of the proposed framework.",
" This paper generalizes previous NeRF-based generative models which focus on single objects to 3D scenes modeling where the canonical coordinate system may not exist. Instead of assuming the camera pose distribution to be shared across different scenes, the authors study the case that camera moves in the scene and thus forms a trajectory. They disentangle the modeling of camera pose and scene representation into different latent variables (and decoders) and learned these variables in an encoder-less method. After the latent variables are learned, the authors further learn a diffusion-based prior on them to enable sampling. Experimental results demonstrate the model's ability for reconstruction, interpolation, generative modeling for both unconditional and conditional cases. For Strengths:\n\nI think this paper study an important problem. Recently, the NeRF-based generative models show great potential on modeling single objects or simple scenes. On the other hand, modeling real-word, immersive 3D scenes where agent moves freely in the enviroment, is an important topic for many applications in robotics, VR/AR, but they are relatively under explored for NeRF-based generators. This paper proposes to solve this problem by disentangling the latent representations for camera pose and 3D enviroment. They also introduce the diffusion-based prior to enable sampling. Their model show good performance in both reconstruction and generation tasks. The paper is overall well-written and easy to follow. \n\nQuestions and potential weaknesses:\nI have several concerns/questions regarding the experiments.\n1. Novel view synthesis performance: In 4.2 the authors show the reconstruction results. I assume these results are got from the trajectories/frames used to train the model. But how well does the model do for novel view synthesis? If during training we use a certain trajectory to get the scene representation $z_{scene}$ for a scene, then can this scene representation be generalized to model other trajectories in the same scene? (Or what will the PSNR/SSIM be if I use the learned scene representation to reconstruct a novel trajctory in the same room?) I think this is an important metrics to see whether the model just overfits the observed trajectories or it really builds a valid 3D representation.\n2. Test on novel objects: Similar to 1, how well can the model do to reconstruct a novel scene that has not be seen during training?\n3. For unconditional generation, the authors mainly demonstrate their model's ability through FID score, which is based on the quality of a single image. However, the model proposed here actually generates a trajectory in the scene. Would it be possible to compare the sample quality on the trajectory level (which, besides the quality of each frame, also consider the consistency among each frames)? A reference metrics here might be the FVD [1] score.\n4. For conditional generation, why there are no baselines?\n\n[1] Unterthiner, Thomas, et al. Towards Accurate Generative Models of Video: A New Metric & Challenges \n Please check the Strengths and Weaknesses part.\n\nA minor question I woild like the authors to clearify here is for line 133, what does the \"denoising recontrution objective\" mean here. As I understand, the authors just use l2 reconstruction loss for image and l1 reconstruction loss for pose. Then what does \"denoise\" mean or where does the nose come from? Does it just mean you perturbed z with additive noise during training? 
Given that the authors also use a denoising diffusion probabilistic model for prior learning in the next part, I think it might be better to make this clearer. I think the authors adequately addressed the limitations and potential negative societal impact of their work.\nBut one potential limitation here might be whether the model can be scaled up to larger scenes, like a trajectory across multiple different rooms or an outdoor scene. The current model uses a single tri-plane representation for the whole scene, which means that the movements of the agent are limited to a predefined area. Then what if the agent moves outside the boundary? Simply enlarging the predefined area or the tri-plane representation to cover larger areas does not seem very efficient. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"WclHdFc2RN",
"zILC_x1xKkd",
"aaKIyoe62G",
"xQ3yNd-IEg",
"EOYsFy9IDb",
"hoYDVrFy0Vm",
"EOYsFy9IDb",
"nips_2022_xijYyYFlRIf",
"K3qL-Gj05QT",
"jSHGkHSBoxs",
"Hq8i3-o_gtw",
"wS5C2Ee7Q13",
"nips_2022_xijYyYFlRIf",
"nips_2022_xijYyYFlRIf",
"nips_2022_xijYyYFlRIf",
"nips_2022_xijYyYFlRIf"
] |
nips_2022_dRgHxaOJsiV | 3DB: A Framework for Debugging Computer Vision Models | We introduce 3DB: an extendable, unified framework for testing and debugging vision models using photorealistic simulation. We demonstrate, through a wide range of use cases, that 3DB allows users to discover vulnerabilities in computer vision systems and gain insights into how models make decisions. 3DB captures and generalizes many robustness analyses from prior work, and enables one to study their interplay. Finally, we find that the insights generated by the system transfer to the physical world. 3DB will be released as a library alongside a set of examples and documentation. We attach 3DB to the submission. | Accept | Reviewers found the presented work to be a useful framework, the paper to contain adequate experiments and interesting demonstrations of the framework's capabilities, and to be overall well written. They appreciated that significant prior results can be replicated easily with the proposed framework. On the flip side, the paper presents no fundamentally new tools, no new results, and no methodological contributions.
One reviewer pointed out a missing connection to probabilistic programming frameworks and referred to several related works. Checking the referenced papers and tools:
- Pyro is an established library, but is barely connected to any rendering at all and has a different use case than the presented paper.
- Picture seems to never have made it beyond a Julia-based alpha stage with "under heavy initial development and has a lot of known bugs" (author's github) and is thus of no practical importance.
- Other referenced papers have no published source code.
Probabilistic programming is indeed a related direction worth discussing, but the existing tools seem to be far away from the proposed work.
Overall, the presented framework could be an interesting debugging tool for researchers and practitioners. Possibly the biggest concern is maintenance: the value of an open-source framework like the one presented here stems largely from dedicated ongoing maintenance efforts.
I thus strongly recommend following the suggestions by reviewers and adding tutorials (ideally reproducing all the experiments in the paper) as well as publishing the collection of shapes promised in the rebuttal. | train | [
"80p9YWRfVO8",
"6AqWEq6Qbgm",
"KlvyrU8VDf1c",
"PcFKW_Bg3m",
"qPgGU6p-k4",
"tANkNpVulu-",
"t78ms-7dvyB",
"r1wNvgJoKEU",
"_aOJp1HsmVO",
"mXmaupg0Z0Y",
"_jL100uWT0w",
"hn1-akFIWzG",
"j23-92toEsN"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nThank you for the detailed rebuttal. A key reference for probabilistic programming include - Kulkarni et al, 'Picture: A probabilistic programming language for scene perception', CVPR 2015. There are several frameworks that incorporate 3D graphics and rendering (e.g. Pyro: Deep probabilistic programming framework in 2017 from Uber ). \n\nI keep my rating as it reflects my current opinion on the contributions of the paper.\n",
" We thank the reviewer for raising their score and all their detailed feedback. We will maintain 3DB with our current team. As we start getting feedback from users and this project gets traction, we will assign a dedicated person for maintaining it (with the help of the open-source community). We cannot disclose more details of the maintenance plan for anonymity reasons, but are committed to ensuring that 3DB remains a useful tool for the community!",
" Thanks for the explanation! I understand how difficult it is to capture multiple objects in the scene. It basically verified my initial guess. I'm happy to keep my rating.",
" I will update my review later, but want to give you update now:\n\n1) I am still very sceptical about usability (and ease of use) of 3DB, and how you are going to do a maintenance, given lack of updates so far. However, I wish to be wrong.\n2) \"Furthermore, to reduce the barrier of entry we collected and plan to release a dataset of more than 5000 3D models compatible with 3DB. These models come with a Creative Commons license and cover many common household objects.\" Sounds good.\n\n\nOverall, I will increase my rating to weak accept.",
" >We will leverage the open source community to suggest improvements to the framework.\n\nWell, suggesting improvements is not a maintenance. I am talking about not only adding new features, but also finding and fixing bugs, making changes against API changes in other libraries (bitrot, https://en.wikipedia.org/wiki/Software_rot), etc. \nFrom my (limited) experience, it is either your employer, that sponsors the project, by allowing allocating your work time, or some corporation/foundation supporting with money. Or you are doing yourselves, which is also OK, but not easy at all. ",
" We thank the reviewer for their comments. We address their specific concerns below:\n\n**[How are you going to support and maintain 3DB?]**\nWe will leverage the open source community to suggest improvements to the framework.\n\n**[Do you plan any tutorial (both like web-tutorial, and like CVPR/NeurIPS tutorial), showing how to install and use 3DB?]**\nWe didn’t originally but this is a good suggestion.\n\n**[Do you plan to release pre-rendered datasets, so people could run their evaluations without installing 3DB?]**\nWe have multiple pre-rendered datasets available (some with more than 1M renders). However since renders are relatively inexpensive on standard CPU clusters and 3DB offers almost impossible freedom we suspect that users will want to generate their own. Furthermore, to reduce the barrier of entry we collected and plan to release a dataset of more than 5000 3D models compatible with 3DB. These models come with a Creative Commons license and cover many common household objects.\n\n**[How much time each experiment described in the paper takes?]**\nFor all the experiments in this paper we used the standard YCB dataset and some custom scanned models that we bought from amazon (as described in Section 4 and Appendix E). In both cases, each model had to be cleaned ~30 min. To reduce this time in the future we will release the above-mentioned dataset of 3D models.\nInference time was negligible. Rendering mostly depends on how many cpus are available but since most image classifiers work on small images we often reach throughputs in excess of 100 images per second. The lengthiest part of the process was the post processing and chart generation. For example generating the heatmaps took about 4h of development.\nFinally, we would like to note that it is really easy to use 3DB and go from ideas to results (once 3D models are available). In fact, going from the initial hypothesis of the case study in Section 3.4 to Figure 13 took less than a single day of work for one author.\n",
" We thank the reviewer for their comments. We address their specific concerns below:\n\n**[Only YCB dataset?]**\nWe supplement YCB objects by sourcing additional objects from Amazon, and using a 3D scanner to obtain corresponding meshes as described in Section 4 and Appendix E. The reason we don’t use only YCB objects is that, as the reviewer mentioned, it only has limited tabletop objects.\n\n**[Need for 3D models]**\nOne limitation of our work is the need for 3D models, which we discuss in the limitations section of our paper. We believe though that discovering vulnerabilities in ML pipelines is an important enough problem that people would want to invest time and money to solve. Once 3D models are available (one can use photogrammetry to get started), it is straightforward to do the analysis using 3DB. In fact, going from the initial hypothesis of the case study in Section 3.4 to Figure 13 took less than a single day of work for one author. \n\n**[Results beyond image classification]**\nIndeed, we agree that object detection and semantic segmentation can be more challenging in the case where there are multiple objects in the scene. Originally, we wanted to show that users will be able to glean in-depth insights into model performance from 3DB, and thus elected to prioritize depth of analysis with a single classifier rather than breadth. However, we appreciate the reviewer’s concern about proving the robustness and extensibility of 3DB to other modalities. 3DB is built to be extensible in the sense that one can add any controls they want that allow them to manipulate objects and environments without restrictions, so 3DB certainly supports other modalities.\n",
" We thank the reviewer for their comments. We address their specific concerns below:\n\n**[Object detection tasks]**\nOriginally, we wanted to show that users will be able to glean in-depth insights into model performance from 3DB, and thus elected to prioritize depth of analysis with a single classifier rather than breadth. However, we appreciate the reviewer’s concern about proving the robustness and extensibility of 3DB to other modalities. As mentioned in the paper, switching models other tasks ( such as objection detection) is trivial and requires no or minimal addition to the code.\n\n**[Why are Cup and Pill bottle considered together in Figure 13?]**\nThis is just for visualization reasons so that we have mixtures of the major three colors, but we can also use separate colors for each of them. But really what the plot is saying is that the model predicts either Cup or pill bottle when it is filled with milk.\n\n**[Minor comments]**\nWe will fix those!\n",
" We thank the reviewer for their comments. We address their specific concerns below:\n\n**[What is unique about your framework that allows scalable testing and validation?]**\nWhat makes 3DB unique is its modular design that abstracts away all the rendering details and exposes a user-friendly interface to simulate anything the user wants with user defined policies and controls. This abstraction, combined with an automatic parallelization of renders on many worker nodes, allows for scalable testing and validation of ML models. While some of the platforms we mention in the related work section may share components with 3DB (e.g., the physics engine, photorealistic rendering), they do not share the same goals as 3DB, i.e., diagnosing specific failures in existing models, and they are not as flexible as 3db, in the sense that they allow some fixed transformations only, or require domain knowledge of how rendering works.\n\n**[Comparison to frameworks that support probabilistic programming]**\nWe thank the reviewer for pointing out this connection! Although we are not familiar with the probabilistic programming literature and are unaware of any rendering engine that supports probabilistic programming, we would be happy to discuss 3DB in relation to any of these frameworks, and will do a thorough literature review and contextualization in our next revision. In general, we believe that 3DB is unique in its combination of ease of use, extensibility, and compositionality. \n\nWe would also very much appreciate it if the reviewer could point us to specific frameworks they think are relevant! We would be happy to discuss these explicitly in our next revision.\n\n**[Methodological contributions for synthetic to real transfer/model validation of simulator]**\n3DB uses blender in order to get photorealistic renders so that it effectively diagnoses failure modes in ML models. The most relevant section to the reviewer’s concern is likely Section 4, where we verified that our rendered images actually identify real failure modes by replicating the scenes in a physical setting. To run the experiments of Section 4 we built a rudimentary mobile app (which we can open-source) for properly positioning the objects, and also made use of off-the-shelf 3D scanning technology to get the models.\n",
" A software framework for testing, debugging computer vision models via use of photorealistic simulated data is described. The utility of the framework is demonstrated through a wide range of use cases and the authors illustrate how vulnerabilities in ML models for vision can be discovered. The framework allows for flexible evaluation of robustness and allows the impact of multiple factors to be studied. There is a discussion of how simulated data analysis transfers to real-world data through systematic experiments that attempt to match real to simulated settings. The software will be open-sourced.\n Strengths:\nA useful framework for simulated data generation, testing and debugging of computer vision models. The experimentation is adequate to illustrate the utility of the tools. The part involving the matching of simulated data to real data acquisitions and the experimentation with particular hypotheses on model performance (section 3.4 and 4) are interesting.\n\nOriginality: The concepts presented and the tool itself are not novel and the results are largely confirmatory of well known behavior of ML models (as expected). \n\nClarity: Well written and clear.\n\nSignificance: The paper in its present form may not yet be significant, although if scaled systematically the paper has significant potential. The paper can benefit from a principled incorporation of performance characterization methodology articulated in the 90's in computer vision. (see haralick.org). Incorporation of probabilistic generative models (including the imaging device and rendering pipeline) and enabling of formal comparison of simulated data statistics to real data statistics can enable the tool to be of practical use. I appreciate the level of work going into buliding a framework that can support systematic debugging and testing of computer vision models. Simulation for performance modeling has been studied in the 90's and more recently in the last decade in the context of ML and deep learning. Can you specifically comment what is unique about your framework that allows scalable testing and validation? \n\nHow does your framework compare to frameworks that support probabilistic programming ? These frameworks allow natural integration of graphics engines with probabilistic programs for synthesis of image datasets and exploit them in an inference loop. \n\nApart from the software tool and properties demonstrated, do you have specific methodological contributions for syn to real transfer/model validation of simulator?\n\n\n In the last decade, a number of papers have studied the use of synthetic data for ML including some of the early papers that illustrate in detail performance characterization methodology with synthetic data and deep learning (in the context of semantic segmentation). The authors may benefit from the series of papers below:\nV. S. R. Veeravasarapu et al, Adversarially Tuned Scene Generation. CVPR 2017: 6441-6449\nV. S. R. Veeravasarapu et al, Model-Driven Simulations for Computer Vision. WACV 2017: 1063-1071\nV. S. R. Veeravasarapu et al, Simulations for Validation of Vision Systems. CoRR abs/1512.01030 (2015).",
" The paper presents a framework for systematic testing and debugging of vision models. The framework uses photorealistic simulation and considers different variables, as different object categories, HDRI backgrounds and lighting conditions, different textures, scene transformations as well as custom controls. By considering different combinations of the variables involved, the framework is quite useful in finding vulnerabilities of machine learning models with respect to specific scene configurations. \nOriginality\n-----------\nSimulation based methods for testing and validating computer vision systems, in general, and deep learning models in particular, have been largely considered in the past. Nevertheless, the proposed method is quite comprehensive, providing control over multiple crucial aspects, while at the same time it is highly customizable.\n\nQuality and Clarity\n-------------------\nThe paper is well written and easy to read. To support the proposed framework, multiple well known challenges of computer vision models are highlighted, where it is shown that significant results from the corresponding literature can be quickly and easily reproduced with the help of the proposed framework. A toy hypothesis is also considered, and it is shown that it can be easily accommodated with the help of some customization. The domain gap between real and simulated data is also discussed in sufficient detail, and quantitative comparisons are provided.\n\nSignificance\n------------\nThe use of the proposed framework can significantly simplify the process of testing and validation of computer vision models. The framework is likely more useful for testing and validating different vision models rather than discovering novel general failure modes, as the modes of failure covered by the framework have largely being explored. Nevertheless, it is important that the framework allows for customization, as this can increase also its applicability in searching for novel controls/aspects with respect to which machine learning models may be particularly susceptible.\n\nMinor comments\n--------------\n* L.122 classifier sensitivity classifiers\n* The order the figures are shown does not follow the order in which they appear in the text 1. The text also briefly discusses the possibility to use the framework for object detection tasks. It would be interesting to show some example\n2. Why are Cup and Pill bottle considered together in Figure 13? The paper openly discusses limitations. Additionally, some crucial aspects, such as the real vs simulated world domain gap, are covered in depth in the text.",
" The paper introduces 3DB, a new framework to test vision models using various renderings of 3D simulation. It allows users to discover model vulnerabilities such as textures, corruptions, geometric transformations, different backgrounds etc. Finally, a realistic benchmark is constructed based on available 3D models, to demonstrate the agreement rate between synthetic renderings and real-world images. Strengths:\n\n- The approach is neat. Considering it's very hard to collect real-world images of the same object under different configurations, it's a good idea to utilize 3D renderings.\n- One of my concern is whether the synthetic test set reflect the performance in real-world images. Failing in a synthetic image does not mean it will necessarily fail in the real world. This is resolved relatively well in Section 4 physical realism and I appreciate the experiment. Still, 40 test images look like a small test set. But I understand how hard it is to collect real-world images similar to a rendering. \n\n\nWeaknesses:\n\n- The approach to test vision models are fairly limited by avavilable 3D resources. Most vision models mentioned here only have 2D predictions -- which means the ground truth is relatively easy to collect compared with 3D models. I think it's very hard to completely test the robustness of the model. For example, it's nearly impossible to find a lot of 3D models for each ImageNet category.\n\n- Only image classification is presented while 3DB should support more vision benchmarks such as object detection and segmentation. It is non-trivial and more challenging to render synthetic images for object detection since multiple objects are involved. Randomly assigning a 3D pose may not reflect the real-world distribution at all. My primary concerns are:\n\n- Are you using YCB dataset scanning 3D objects yourselves for all 3D models? I think it only has limited tabletop objects. Is it feasible to test on more categories? \n\n- Do you have any results (even qualitatively) on benchmarks besides image classification? Limitations are discussed in the paper.",
" The paper presents 3DB — library/software package for studying computer vision models sensitivity to various transformations, from the geometrical, like camera pose change, to semantic, e.g. “Does color of the liquid in the cup influence the classification output of the model?”.\n3DR is implemented as (wrapper + UI) around (Blender + deep learning model runner). It (supposedly) comes with some 3D models and background — graphical “assets”.\nThe paper showcases the library with several case studies — background influence, liquid color influence, and zoom/rotation influence on the classification model output. The paper is not a standard NeurIPS paper, so this review would be a bit non-standard as well. The main contribution is not the presented experiments themselves, but rather the software package usefulness for researchers. \nThe package is provided in the supplementary material, so I have tried to install/run it.\nUnfortunately, it is not possible, because there no “setup.py” file, also the requirements.txt are empty. \n\nIn order to try the run the library, I have looked at github for 3db, and luckily found it, under the same org name “3db”. Also, I have seen some github handles of the authors, but none of them is known to me, so I consider myself still in double blind mode. \nHowever, if this is not the case, feel free to ignore my review and ban me. \n\nAnyway, let me continue. The repo was more full and contained instructions how to install the thing, however, MacOS was not supported. Luckily, there was a docker option. Unluckily, it was not working either. I was able to proceed a little bit more, by fixing a couple of mistakes there (e.g. python==3.7, not python==3.7.7 for conda env), but finally gave up, as I spend on this around 1.5 hours without much success. Some paths are hardcoded as well (e.g. threedb_master) , which does not help either.\n\nNow, I fully realize, that I probably was not motivated enough to make it work, also my devops skills definitely suck. \nLet’s meet on the middle ground and say that library is not ready/documented enough for usage by beginners. Expert should definitely be able to run the 3DB, and the middle person - depending on time and motivation.\n\n\nNow, when I fail to try the package myself, I fall back to evaluating it from the paper only.\n\nSupposed UX is the following:\n1. User comes up with hypothesis to test, e.g. if the chair detection performance is heavily dependent on are \n2. User creates/buys 3D model of the chair, Blender-compatible, and some backgrounds\n3. User writes down the config in yaml for rendering parameters\n4. The rendering and inference occurs in a parallelized way (map)\n5. Results are them reduced to the visualization server, where one could generate summarization graphs, as well, as see the failure cases themselves.\n\nThis seems as attractive process except stage 2. \n\nWeaknesses:\n\n1. The installation process is not user-friendly\n2. (stated in paper itself) you are as good, as 3D models you own. No 3D models - no 3db \n3. Although, that title says “Debugging Computer Vision Models”, in fact, only classification and detection is really supported now. Tracking, matching, etc are beyond the score of 3db (at least now. One can extend the framework probably). \n\n\nNote, that I don't evaluate experiments presented in the paper themselves, as they were used just as a showcase for the tool. I like the experiments, though. I actually, have no idea, how to evaluate such paper/tool w.r.t. NeurIPS acceptance. 
My gut feeling is the following (feel free to correct me if I am wrong, both AC, Rs and authors):\n\nIf the tool would be useful for the machine learning community, the paper should be accepted. If not, then not. \n\nQuestions (again, non-standard) \n\n1. How are you going to support and maintain 3DB? Will there be someone supporting it at least part time? Open source community, authors themselves/etc?\n2. Do you plan any tutorial (both like web-tutorial, and like CVPR/NeurIPS tutorial), showing how to install and use 3DB?\n3. Do you plan to release pre-rendered datasets, so people could run their evaluations without installing 3DB?\n4. Could you please elaborate on how much time each experiment described in the paper takes? With (if possible) a time breakdown: \"search for 3D models online\" - X hours, debugging - Y hours, running rendering & inference - Z hours, etc.\n\n\n****\nAfter-rebuttal update:\n\nThe authors have addressed 3/4 of my questions and additionally announced the release of a dataset of 5k objects under a CC license. This significantly reduces my concerns. The last, unaddressed one, is maintenance of the library, but let's hope for the best.\nI am increasing my score to weak accept.\n\n\n Yes, authors clearly stated the limitations of their framework"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"_aOJp1HsmVO",
"PcFKW_Bg3m",
"t78ms-7dvyB",
"tANkNpVulu-",
"tANkNpVulu-",
"j23-92toEsN",
"hn1-akFIWzG",
"_jL100uWT0w",
"mXmaupg0Z0Y",
"nips_2022_dRgHxaOJsiV",
"nips_2022_dRgHxaOJsiV",
"nips_2022_dRgHxaOJsiV",
"nips_2022_dRgHxaOJsiV"
] |
nips_2022_Xg-yZos9qJQ | Exploration via Elliptical Episodic Bonuses | In recent years, a number of reinforcement learning (RL) methods have been proposed to explore complex environments which differ across episodes. In this work, we show that the effectiveness of these methods critically relies on a count-based episodic term in their exploration bonus. As a result, despite their success in relatively simple, noise-free settings, these methods fall short in more realistic scenarios where the state space is vast and prone to noise. To address this limitation, we introduce Exploration via Elliptical Episodic Bonuses (E3B), a new method which extends count-based episodic bonuses to continuous state spaces and encourages an agent to explore states that are diverse under a learned embedding within each episode. The embedding is learned using an inverse dynamics model in order to capture controllable aspects of the environment. Our method sets a new state-of-the-art across 16 challenging tasks from the MiniHack suite, without requiring task-specific inductive biases. E3B also outperforms existing methods in reward-free exploration on Habitat, demonstrating that it can scale to high-dimensional pixel-based observations and realistic environments. | Accept | The reviewers appreciated the fundamental questions the paper was asking, the clear writing and argumentation of the paper, and the convincing empirical experiments. While there were concerns about the theoretical rationale of resetting the covariance matrix, the empirical results show it is indeed important. For these reasons, I recommend acceptance. | train | [
"wDJMlk2pbdz",
"SZV6S-r5yvL",
"BktZXS1jEGv",
"KLVreeyFmPx",
"1M1wkPib_EH",
"7zQHwAhkdAL",
"xH0yBCZsiSH",
"Q-Yrd7MO4F-",
"rA3uKN4IFn8",
"bgqs7tohK5",
"V-juRSWtWA",
"zzBncHmSULf",
"tvMCHyAsTr",
"iy3rlnYOSV",
"9Hzh34aJrNh",
"Cmjqk5R2P6U"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for reading our response. Although we still respectfully disagree with the current rating, we do appreciate you raising your score. We address the concerns you bring up below:\n\n$~$\n\n1. Although you say “naively using count-based bonus obviously does not make sense”, we would like to point out that the three best previously published methods for CMDPs that we consider (RIDE, AGAC and NovelD) _all_ use an episodic count-based bonus. Also note that in the papers introducing these methods, the episodic count-based bonus is mentioned as a heuristic rather than the central algorithmic contribution. Our point is this: on one hand, the importance of the episodic count-based bonus has been underestimated by the community, while at the same time, the reason its limitations haven’t been more apparent is because these algorithms have been evaluated on simple MiniGrid environments where there is a small number of states per context (and hence, the count-based bonus works). If we run these algorithms on more complex environments with a large number of states per context, they fail because the count-based bonus is no longer meaningful. **Therefore, we i) point out a major limitation of existing methods, and ii) propose a solution. We believe these are important contributions to the community, even more so because our fix is simple and easy to integrate into other algorithms.**\n\n$~$\n\n2. First, we would like to note that the example you describe (a maze with multiple dead end hallways) is similar in nature to the Bidirectional Diabolical Combination Lock in the PC-PG paper, which requires not just an appropriate bonus construction, but also more sophisticated mechanisms such as a policy cover to solve reliably (note RND, which uses a non-episodic bonus, also fails on that task due to using a single policy). As we mentioned above, it’s not obvious how to use a policy cover in a CMDP where the same environment is never seen more than once. **Second, this is just one example environment, whereas our experiments use over 20 environments, including ones based on real-world scenes. While we do not claim our algorithm works in _every_ CMDP, it works on many of the ones used in empirical research, and serves as an important foundation for future refinements.**\n\n$~$\n\n3. We unfortunately do not follow the reviewer’s reasoning, and do not understand the limitation they are referring to. **Procedurally-generated environments are a type of CMDP, and are the most common type used in empirical research. We also evaluate our algorithm on Habitat, which is a type of CMDP which is not procedurally-generated.** Finally, it is clear that there must be some shared structure among the different environments instantiated by the CMDP - otherwise there is nothing to generalize from. This holds true even in the dense reward case where exploration is not necessary, since the policy must generalize between different environments corresponding to different contexts. \n\n$~$\n\n4. **Although what you describe would be ideal, unfortunately this is not computationally feasible**, since it would require storing all the agent’s experience (up to 50 million samples total), and would require running all of these through the feature encoder every time it is updated (currently, we update it approximately every 80 steps). 
Even using the naive matrix inversion rather than the efficient rank-1 updates causes a slowdown of 3x, which is not practical (currently, training one agent on one environment takes over 20 hrs on a modern machine with one GPU and 40 CPU cores). \n\n$~$\n\nWe thank you again for engaging with our paper. At this stage, we are unsure of what actionable steps we can take to update our paper so that you will consider increasing your score, but we are open to suggestions and happy to run additional specific experiments (within reason) for the final version. Nevertheless, we hope the above clarifications have fully addressed your concerns and you will consider recommending acceptance.\n",
" I appreciate the authors for their reply. Here I would summarize my arguments with some updates:\n\n1. We agree that the elliptical bonus is suitable for continuous state space (let's put the issue of whether it is episodic aside for now), and naively using count-based bonus obviously does not make sense. I would restate that the elliptical bonus is widely studied both in theory and in practice, so I can not agree this is actually one of the contributions of the paper.\n\n2. Regarding the proposed episodic variant of the elliptical bonus, I would argue the example I mentioned above is actually just a very simple and general example without any specific adversarial construction, so I don't agree it is not representative and can be simply ignored. \n\n3. Regarding the author's claim on the importance of cmdp, I actually think this is the major source of our disagreement. The authors agree that in order for the episodic bonus to work, the problem has to enjoy a certain favorable structure. I think this is why previous papers such as [1] claim that their episodic reward is for \"procedurally generated environments\" instead of claiming to solve cmdp, and there seems no addressing of this limitation in the revised paper. \n\n4. I would like to bring up an issue of the ablation study that the authors mention. In the ablation study, the authors compare the episodic version and non-episodic version of their constructed bonus. However, according to the efficient computation of the covariance matrix that the authors propose in the paper, if they compute the non-episodic bonus in this way, the covariance matrix is aggravated by all previously learned features, which does not make sense anymore because one should use the latest feature and use all previous samples to recompute the covariance matrix from scratch. Please correct me if I understand the construction wrong.\n\nIn summary, I agree, as I mentioned in the previous threads, that the paper shows good empirical results. However, again, the novelty and contribution of the proposed method, compared with previous (learning feature + elliptical bonuses) methods, seems only the episodic version of the bonus for cmdp. The contribution seems limited to me, and I am not convinced episodic bonus is the solution to cmdp. Other factors (such as the changing feature as I brought up earlier) may also result in a boost in performance and I believe a more careful empirical analysis is required. I argue that regardless of the nature of the paper, one algorithm should match its claim at least on an intuition level but I am not convinced in the current form of the paper. Finally, I acknowledge that the wording of my current rating is overly harsh: thus I increased my rating by 1 but I can not assign a score higher than this because that would contradict my overall evaluation of the paper. \n\n[1] Ride: Rewarding impact-driven exploration for procedurally-generated environments.",
" Thank you for your engagement in the discussion and your help in improving the clarity of our paper. That said, we are honestly quite puzzled by your assessment of the paper at this point. To summarize, it seems we agree on the following points:\n\n- Empirical results of our method for the challenging pixel-based photorealistic CMDP Habitat are impressive\n- CMDPs are an important class of RL problems [1] with more and more community benchmarks building upon the framework (e.g., [2]-[8] to name just a few)\n- Episodic bonuses are helpful for practical CMDP problems that enjoy certain structures \n- Naively counting states (using the identity function) is ill-suited for constructing episodic bonuses in large/continuous state spaces (note that this is exactly what our method is addressing)\n\nIn our view, we have clearly demonstrated that our method, building on elliptical bonuses, overcomes the problem of naively counting states for episodic bonuses, that it empirically beats state-of-the-art exploration methods on a wide range of established CMDP benchmark environments, and that it even works for very challenging pixel-based photorealistic 3D environments (Habitat).\n\nWith all respect, if you agree with the statements above (which as far as we understand, you do), then we fail to see how a \"3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.\" is justified.\n\nTherefore, we would appreciate more clarification as to why you still think resetting the covariance matrix is a concern. **We have explicitly run a version of our algorithm where the covariance matrix is _not_ reset, and it performs much worse.** This is in Figure 6, called \"E3B (non-episodic)\". Furthermore, in our response above we have provided intuition for why resetting the covariance seems appropriate for CMDPs where the environment changes each episode (see paragraph 3 in our section on resetting the covariance). \n\n**Note that our paper is not a theory paper but an empirical one**. We do not make any (theoretical) claims that our approach will work in _every_ possible CMDP, but rather demonstrate empirically that our approach outperforms other SOTA methods on many established challenging CMDP benchmarks, with more than 23 tasks in total. Thus, we believe our thorough empirical evaluation of this approach should be enough to warrant acceptance.\nWe acknowledge that it would be interesting to understand from a theoretical standpoint _why_ our method works when it does, and also characterize the cases when it does not work, but this is outside the scope of the current paper. In fact, it often is the case that an algorithm’s effectiveness is first demonstrated empirically and later on a theory is developed to better understand why. We hope our paper will inspire other researchers to further investigate this. 
However, we do not think that the existence of one possible counterexample invalidates our entire approach and is cause for rejection.\n\n**We would like to emphasize the thorough nature of our empirical results**: our algorithm outperforms 8 competitive baselines, including ones _specifically designed for CMDPs, as well as new variants of existing baselines (such as NovelD-message, NovelD-pos) which we have designed and which improve upon the original baselines themselves_, on 22 diverse CMDP tasks from MiniHack, as well as in a 3D photorealistic setting from Habitat, which are based on _real world_ indoor settings.\n\nFinally, we would like to point out that we are addressing a considerably more general and challenging setting than previous work that only considers singleton MDPs. While our current algorithm may not perform as well as methods specifically tailored for singleton MDPs (like PC-PG/PC-MLP) on certain pathological MDP problems, methods like PC-PG are also expected to fail on CMDPs where the environment changes each episode, as noted in our response above (because the covariance of features computed over one episode no longer makes sense in the context of another episode, since the environment has changed). Thus, we are not aware of any other approach that performs as well as ours in practice on such a wide range of challenging CMDP benchmarks and we believe this in itself is a valuable contribution that other researchers could build on.\n\nWe thank you for your review and consideration. We made a significant effort to address your concerns, and would greatly appreciate it if you would consider raising your score in light of our response. Please let us know if you have any final questions that we could further clarify. ",
" \n\nReferences:\n\n[1] Kirk et al. \"A survey of generalisation in deep reinforcement learning\"\n\n[2] Juliani et al. \"Obstacle tower: A generalization challenge in vision, control, and planning.\" \n\n[3] Cobbe et al. \"Leveraging procedural generation to benchmark reinforcement learning.\" \n\n[4] Küttler et al. \"The NetHack learning environment\"\n\n[5] Samvelyan et al. \"MiniHack the planet: A sandbox for open-ended reinforcement learning research\"\n\n[6] Guss et al. “The minerl competition on sample efficient reinforcement learning using human priors”\n\n[7] Chevalier-Boisvert et al. “Babyai: A platform to study the sample efficiency of grounded language learning”\n",
" First I would like to thank the authors for their detailed response and also apologize for my late reply and certain overlooks in the original review (for example, the random network baselines). I would address some major issues in this followup:\n\n1. First I want to further discuss the contextual mdp setup. I am aware of the setup in my original review but I would provide some additional clarification since it causes some confusion during the discussion. I would argue that we should consider CMDP as a broader class of MDP, because for example, we could consider a singleton mdp as a contextual mdp whose context distribution only has one non-zero point-mass. I appreciate the authors' acknowledgment that my previous maze counter-example would make sense in the singleton mdp case, but we can easily extend it to cmdp case because we can assume everytime the dynamics are nearly identical each iteration (and deterministic), and the goal is sampled in some state that the policy never visited before, then I believe the counterexample should still hold. \n\n2. However, the above argument could be treated as an argument against episodic bonus overall. Here I thank the reviewer for the pointer to section 3 which shows that the episodic bonus is indeed helpful even for other algorithms. Thus it indeed seems like episodic bonus is helpful for practical cmdp problems that enjoy certain structures. \n\n3. I appreciate the authors for acknowledging that counting-based bonuses are ill-suited for large/continuous state spaces. I understand the motivation of section 3 for showing the failure of count-based methods, but I still can not appreciate the way the current motivation is formulated because the ineffectiveness of counting based bonuses for large/continuous state spaces is just too obvious. In addition, if we want to talk about small state/action space, I would also like to point out that **elliptical potential bonuses reduce to count-based bonuses in tabular case.**\n\n4. For the positive side, I am very impressed by the new experiments because the habitat benchmark seems a challenging environment.\n\n5. For a side note, one thing we have been ignoring is that the algorithm is *learning* the feature and the bonuses are constructed by the learning feature. Although again previous algorithms have been studying learning feature + elliptical bonuses, the practical advantage of learning such features on the fly seems still unclear. For example, if we use *ground-truth* feature + elliptical episodic bonuses in the above maze counterexample, it does not seem to work. However, using a shifting feature (even though it is incorrect!) seem to break the deadlock (even though it's not clear to me if it is in a good way or a bad way).\n\nIn summary, I would acknowledge that my major critique still remains, but mostly to the episodic bonuses. However, the most major difference between this work and previous (learning feature + elliptical bonuses) methods is the reset of the covariance matrix (thus episodic bonus), which as I argued above, remains not obvious why it would work without nice context generation property. While I highly appreciate the new experiment result, I would still like to keep my original evaluations.",
" Thanks to the authors for addressing the concerns and updating the paper. I will stick with the current score. This is a nice paper.",
" I would like to thank the authors for addressing the concerns that were raised, and actively improving the paper during the rebuttal phase. Considering the changes to Section 5.2, and the detailed clarifications, I have updated my score.",
" Thank you to all the reviewers for putting time and care into reviewing the paper. We answer individual questions and comments below. Furthermore, in response to Reviewer 1, we have added a revision of the paper which includes new results on a challenging photorealistic 3D environment (Habitat), providing additional evidence that E3B is effective in high dimensional pixel-based settings. While the Vizdoom results were a useful sanity check that E3B can work with pixels, the baselines were already able to solve it, so there was not much room for improvement. We believe this is because unlike MiniHack, Vizdoom is a singleton MDP where the environment is the same at each episode, so existing methods already work well. \n\nWe have added new experiments for reward-free exploration in the Habitat embodied AI simulator, which unlike Vizdoom, is a contextual MDP where the environment changes each episode (each episode, the agent finds itself in a different house). Here we see a large improvement in E3B’s performance over the baselines. We believe this gives further strong evidence that E3B is effective for exploring contextual MDPs (CMDPs) with rich, pixel-based observations, and reinforces E3B’s broad applicability. \n\nDue to the 9 page limit for submissions, we moved the Vizdoom results to Appendix C.2, and also moved the figure in Section 4.1 to Appendix B to make space for the Habitat results. Since an additional page is allowed for the camera ready version, we will add these back to the main text if the paper is accepted. \n\nWe believe we have addressed the reviewers’ concerns and that, as a result, the paper has become much stronger. \n",
" $~$ \n\nThank you for your insightful review, your suggestions to improve the paper, and your support for our work. We are glad you thought the experimental section was convincing and the claims well-argued. We have uploaded a revision based on your suggestions and address the comments below. \n\n- [_While the motivation and the intuition of the method are clear,..._] We would have liked to include the algorithm in the main text itself, but were limited by space constraints. However, since there is an additional page allowed for the camera-ready, we will add this to the main text if the paper is accepted. \n- [_The method is not particularly original, as it can be seen…_] We agree that the idea of elliptical bonuses has been around for a long time and is not new, and we clarified this by adding references in the introduction as suggested. We believe the key contributions of our work are: i) highlighting the importance of the count-based **episodic** bonus in prior approaches for CMDPs, showing why it fails in more realistic scenarios, and ii) fixing this problem by proposing the elliptical **episodic** bonus. To our knowledge, using the elliptical bonus at the episode level has not been done before (prior approaches compute the bonus using all previous episodes), and this is essential for good performance in CMDPs, as evidenced by our ablation in Section 5.1.3 where using the non-episodic elliptical bonus does much worse. Furthermore, existing works using elliptical bonuses in deep RL settings with singleton MDPs (PC-PG, ACB) use different feature extraction methods which we show do not work in our setting. \n\n$~$ \n\n### Main questions:\n\n- [_The method’s clarity can be improved…_] We have clarified how the bonus is integrated with the extrinsic reward function in Section 4.1. Concerning the global term, in initial experiments we also included NovelD’s global term, but this performed similarly to the elliptical episodic bonus alone, so we removed it to have a simpler algorithm.\n- [_I would suggest the authors to ablate…_] That is a very interesting suggestion. We didn’t have the time to do this in the revision period, but will look into it for the camera ready. \n- [_The choice of MiniHack tasks is not well motivated…_] We have clarified this in the paper. As our starting point, we chose the MiniHack tasks from the original MiniHack paper which were not solvable by the methods evaluated in that paper (such as MultiRoom-N4-Locked, MultiRoom-N4-Lava). We then made some of these harder, by adding rooms (i.e. making N6, N10 versions of the tasks). All of the skill-based tasks are taken from the official github repository. We also added several other navigation-based tasks from the github repo: Labyrinth-Small and Labyrinth-Big, and the LavaCrossS\\*N\\* tasks. We also added some new task variants such as MultiRoom-OpenDoor. These are interesting because they highlight the brittle nature of some of the count-based heuristics such as NovelD-message. Indeed, NovelD-message is able to solve the closed door variants, because opening the door causes a message to appear, but it fails to solve the open door variants because no messages appear which provide a novelty bonus. This phenomenon is discussed in Appendix D.4, and more details about the environments are given in Appendix C.1.3. To our knowledge, this work performs the most comprehensive evaluation of MiniHack tasks to date. 
\n- [_Figure 6 shows how RIDE does not achieve any reward signal…_] First, the RIDE results in Figure 6 use the standard episodic count-based bonus (without heuristics like the ones we used for NovelD, i.e. extracting (x,y) positions or messages), therefore due to the time counter feature in MiniHack this bonus will be constant. Based on our results in Section 3, this is enough to make the algorithm fail. We also tried some limited experiments using the same position-based heuristic as NovelD, but weren’t able to get good results either. Note that the original MiniHack paper also reports that RIDE with a count-based bonus based on (x,y) positions did not give any improvement over IMPALA. \n\n$~$ \n\n### Minor comments:\n- [_line 50: Prior works…_] done\n- [_lines 83-85: …adding references to NovelD and AGAC…_] done\n- [_Figure 1: What is the behavior of each agent…_] We will check this for the final version of the paper. \n- [_line 123: this experiment is only reported for NovelD…_] done\n- [_line 176: Why is $n$ not fixed across experiments?..._] We set the $\\phi$ encoder to have the same architecture as the policy network, and thus set $n$ to be equal to the number of hidden units at the last layer. This was 1024 for MiniHack and 256 for Vizdoom. \n",
" $~$ \n\nThank you for your detailed and insightful review, your suggestions to improve the paper, and your support for our work. We are glad you enjoyed the paper’s look at the limitations of existing methods and ways to fix them. We answer your questions in detail below. \n\n- [_I view the paper as trying to solve two different problems_] Please see our answers to questions 2 and 6 below. ",
" $~$ \n\n### Questions:\n\n1. [_How are differences between contexts defined?..._] For procedurally-generated CMDPs such as MiniHack/MiniGrid, the context can be thought of as the random seed determining the generation process. How much difference this induces between two different environments depends on the task. For example, for the MiniHack-MultiRoom tasks, different random seeds cause very different map layouts and number/type of monsters. For others, the spatial layout is the same but different seeds cause there to be different objects which the agent must use. In all our settings, the reward function is defined the same way. For Habitat, different contexts correspond to different simulated houses, which can differ a lot in their spatial layout and visual characteristics. Concerning the ellipse, it is fit using all the previous samples in the current episode. So after $t$ steps in the current episode, we use $t-1$ samples to fit the ellipse and compute the exploration bonus. \n2. [_The paper also notes that “any feature learning method” could be used, but…_] If there is a natural off-the-shelf feature extractor, then indeed this could be used in place of the feature extractor learned with the inverse dynamics model. Currently, we are not aware of any pretrained feature extractors for NetHack/MiniHack, so we chose the inverse dynamics model. But we agree that it is interesting to compare the performance of different feature extractors. In Figure 7 we compare to two other ones: a simple randomly initialized network (even though this is simple, some other works have shown that this can work surprisingly well in some cases, e.g. [11, 2]), and the policy trunk (which has been used in [5]). Out of these, the feature extractor learned with the inverse dynamics model worked the best. \n3. [_How sensitive are results to the regularization term in Equation (1)?..._] We ran experiments with different values of the $\\lambda$ parameter and added results in Appendix D.5. Compared to our default value of $\\lambda=0.1$, there is no statistically significant drop for $\\lambda=0.01$, but $\\lambda=0.001$ and $\\lambda=1.0$ are a bit worse. This shows that the $\\lambda$ parameter is robust across an order of magnitude. \n4. [_I know the goal isn't explicitly lifelong learning, but I am curious what the authors think…_] That’s an interesting question! One approach could be to define the covariance matrix as follows: $C_t = \\sum_{i=1}^t \\alpha^{t-i}\\phi(s_i)\\phi(s_i)^\\top + \\lambda I$ for some $\\alpha<1$ (say $\\alpha=0.99$ or $0.95$). This way, the contribution of samples in the past to the covariance matrix decays smoothly with time. This can be implemented by updating $C_t = \\phi(s_t)\\phi(s_t)^\\top + \\alpha C_{t-1} + \\lambda I$. However, this gets a bit tricky because we can no longer use rank-1 updates to compute the inverse. An alternative could be to constrain the $C_t$ matrices to be diagonal, in which case inversion is easy and we don’t need rank-1 updates anymore. In preliminary experiments we found that the diagonal approximation still worked well, but this would need to be tested more thoroughly. \n5. [_So we can see a clear performance improvement in Figure 6. But I am curious why…_] We believe the reason is that the Vizdoom tasks are singleton MDPs - i.e. the environment is the same every episode. This is in contrast to contextual MDPs (like MiniGrid/MiniHack/Habitat), where the environment changes every episode. 
ICM was originally designed for singleton MDPs (in fact, Vizdoom was one of the tasks used in the original ICM paper), so it makes sense that it works well here. What we mean in Section 3 is that the count-based episodic bonus component doesn’t fare well in large/continuous state space. This seems an important driver of performance for CMDPs, but it’s not clear that it’s important for singleton MDPs. For singleton MDPs, the other terms in the bonus may be sufficient for exploration. **Note also that in our new results with Habitat (which is a CDMP), ICM performs much worse than E3B**. \n6. [_How will the count-based methods in Table 1 improve or change…_] If every state is distinct, even if we encode them using the feature extractor learned with the inverse dynamics model before feeding them to the count-based term, it is unlikely to work because the encoder would have to map two distinct states to exactly the same embedding for the count-based bonus to treat them as the same. Even if the distance in embedding space is tiny, the count-based bonus will treat them as separate. However, one could replace the count-based bonus in any of the methods by the elliptical bonus. This was our original approach, but we later found that using the elliptical bonus alone gave equally good performance so we stuck with that to make the algorithm simpler. However, there might be settings in which combining a cross-episode bonus with an episodic bonus would be helpful, which we hope to investigate in future work.",
" $~$ \n\nThank you for your careful reading of our paper and your detailed questions. It appears the main concerns are i) resetting the covariance each episode, ii) unclear contribution compared to PC-PG and PC-MLP, and iii) the limited improvements in Vizdoom with image-based observations. We address each of these points below. The crucial point is that we are considering CMDPs, where at the start of every episode some aspects of the environment change, and that we compare to a wide range of published state-of-the-art baselines in that problem setting.\n\n$~$ \n \n### Limited improvements in image-based VizDoom\n\nThe experiments on Vizdoom serve as a sanity check that our method can work with pixel-based observations, but it is not a CMDP - the environment/map is the same each episode. Since it is a singleton MDP, it is not too surprising that the baselines which are designed for singleton MDPs work well, and our method does not offer much improvement over them. \n\nHowever, since the initial submission we have performed additional experiments in reward-free exploration using the embodied AI simulator Habitat, which we believe is a much more interesting and considerably more difficult setting (please see Section 5.2 in the updated paper). In addition to having rich pixel-based observations and simulating real-world spaces, this is truly a CMDP because in each episode, the agent is initialized in a different simulated house (there are 1000 houses total, split into train/test sets). Here, E3B provides a much more significant improvement over existing methods. These results highlight the relative strength of our method over baselines in the more general (and more difficult) CMDP setting.\n\n$~$ \n\n### Resetting the covariance each episode\n\nFirst, we would like to again emphasize that our algorithm is designed for the contextual MDP framework, where each episode corresponds to a different environment, rather than the standard MDP framework, where the agent is spawned in the same environment each episode. This distinction is important for understanding why the covariance matrix is reset each episode. \n\nThe maze example you describe (if we understand correctly) falls into the standard (singleton) MDP framework where the agent is spawned in the same maze each episode. Let’s assume for simplicity that the feature extractor $\\phi$ extracts the $(x,y)$ position of the agent. In this case, indeed it does make sense to compute the covariance matrix across all of the previous episodes. In this way, if the agent has visited the left half of the maze over the course of previous episodes, then the covariance matrix will reflect this and the bonus will be low for the left half of the maze and high for the right half, which will encourage exploration of the unseen right half of the maze. \n\nHowever, an example which is closer to the contextual MDP framework we are considering would be where the maze is regenerated randomly at each episode (this is similar to MiniGrid/MiniHack tasks where the map layout is randomly generated each episode). In this case, if the agent has visited the left half of the maze at episode N, this doesn’t mean it shouldn’t visit the left half of the maze at episode N+1, because they are two different mazes. Since each episode corresponds to a distinct maze, we want our agent to learn a policy which explores as much as possible of the maze within each episode, and resetting the covariance matrix each episode encourages this. 
\n\nAlso, observe that the count-based episodic bonus $N_e(s)$ used in RIDE, AGAC and NovelD resets after each episode, and our experiments in Section 3 show that it is an **essential** driver of performance. The way our algorithm resets the covariance matrix after each episode is analogous to the count-based episodic bonus resetting after each episode. \n\nNote that we include a comparison to a version of the algorithm where the covariance matrix is _not_ reset at each episode (this is denoted E3B (non-episodic) in Figure 7), and see a large drop in performance. **Our empirical results show resetting the covariance matrix is indeed beneficial in the CMDPs we consider**. \n\nBased on our extensive experiments on MiniHack, VizDoom and Habitat (a total of 26 tasks, including ones with high-dimensional pixels), we believe that E3B in its current form is effective on a wide range of CMDPs. However, it is possible it would fail on certain problems such as the bidirectional diabolical combination lock described in the PC-PG paper (which is similar to the example you describe). Note however that other methods like RND, which use cross-episode novelty, also fail on this task. It is currently not clear to us how to incorporate ideas such as a policy cover used in singleton MDPs to the CMDP case, but this could be interesting future work. \n",
" $~$ \n### Contribution compared to PC-PG and PC-MLP\n\nAs noted above, the focus of this work is on contextual MDPs, where the environment changes at each episode. Although PC-PG and PC-MLP also use elliptical bonuses, a major difference is that they are designed for singleton MDPs. The assumption that the environment is the same at each episode is central to both algorithms: they both operate by iteratively growing a policy cover that progressively covers the state space of the MDP, and define the reward bonus to be high outside the covered region using the covariance matrix of previous policies computed over previous episodes. However, to go back to the maze example above, if the maze changes from one episode to the next, the covariance matrix of features computed over one episode no longer makes sense in the context of a different episode, because they each correspond to different mazes. Therefore, although they both use an elliptical bonus, E3B and PC-PG/PC-MLP are designed for very different settings. E3B was designed for the CMDP setting which is a more general and challenging one, whereas PC-PG and PC-MLP were designed for the simpler MDP setting. As our experiments show, other exploration methods which are very effective in singleton MDPs such as RND or ICM, are significantly worse than E3B when applied in CMDPs.\n\n\n$~$ \n### Questions:\n\n- [_In figure 1, why are the baselines not performing well…_] For all experiments in Figure 1, we used the official code provided by the authors of the algorithms with the recommended hyperparameters. We believe the fact that the methods do not completely solve the task is due to the difficulty of the task. In Figure 12 of Section D.1 in the Appendix, we provide a similar ablation on two other MiniGrid environments where some of the methods perform better (for example, NovelD gets close to 1 return). Our main point here is that **if the episodic count-based term is removed, these baselines algorithms fail to learn at all**.\n- [_The motivation against using counting-based method seems not very fair…_] We agree that count-based bonuses are ill-suited for large/continuous state spaces. However, existing published state-of-the-art works for CDMPs (RIDE, AGAC and NovelD) all use a count-based episodic bonus, and we show in Section 3 that they all perform poorly if it is removed. Our point is that using a measure of episodic novelty is essential for good performance in CDMPs, but the existing way of doing this (using a count-based episodic bonus) does not scale to large/continuous state spaces. This is why we introduce the episodic elliptical bonus, which also measures episodic novelty, but does scale to large/continuous state spaces. \n- [_In the feature learning part, it is known that the inverse dynamics model is valid only if it is conditioned…_] The inverse dynamics model and feature extractor are trained using the on-policy data generated by the policy. However, note that it maps (s, s’) pairs to actions. Therefore, if the policy takes action $a_1$ at state $s$ the first time, and $a_2$ at $s$ the second time, these will likely generate different next states $s’$ which will allow the inverse dynamics model to disambiguate between $a_1$ and $a_2$. If we have two different actions which induce the same distribution over next states, then this may not be a well-defined problem, but this is probably rare in practice and may not affect the features learned by $\\phi$. 
\n- [_In result shown in Fig.6, is there any hypothesis why RND completely fails..._] We hypothesize that this is because RND constructs its bonus based on all previous episodes, which is often not appropriate for CMDPs since the environment changes each episode (see our maze example in the section above about resetting covariance matrices). \n- [_The ablation shown in section 5.1.3 is not using very strong baselines…_] We did run E3B using a random network encoder, this is denoted E3B (random enc.) in Figure 7. Please let us know if this is not clear. It is not possible to directly run a baseline with RFF kernels, because the MiniHack inputs are mostly symbolic. \n- [_In Fig.8, is there any possible explanation for why E3B is worse than the baselines in some task?_] E3B matches the final performance of all baselines. It requires slightly more samples for the last task, but we did not tune it much and it’s possible that it would converge faster with some more tuning. In general, we believe it’s hard to draw conclusions from convergence speed unless the difference between two algorithms is really big, because one can always tune methods more to try and make them converge faster. \n\n$~$ \n### Summary\n\nWe hope the clarifications above and the new experimental results on a photorealistic 3D environment are sufficient for you to reconsider your assessment of the paper. Please let us know if you have any outstanding concerns that stand between us and a strong recommendation for acceptance.",
" The paper proposes a method that alternates between learning the feature mapping via the inverse dynamics model and using the learned feature to construct the elliptical bonus for exploration. The paper also analyzes several heuristic exploration bonus method by removing the count-based component in them and shows that they fail under such modification. The paper claims that the proposed algorithm contributes the most to the contextual markov decision processes (CMDP) setting, and shows its empirical competitiveness in the MiniHack benchmark. ## Strength\n\n1. The proposed method, E3B, when evaluated in the MiniHack benchmark, which is a reasonable benchmark for CMDP, shows better performance than the previous empirical methods with heuristic exploration bonuses. The evaluation also contains some variants of the baselines to provide more insights into the results. \n\n2. The paper is easy to follow with several informative visualizations. \n\n## Weakness\n\n1. It is very surprising to see that the proposed algorithm, which resets the empirical covariance matrix at the beginning of the episode (according to Algorithm.1), can actually work. First, this should not work in theory, where your covariance matrix $C$ is constructed using the policy cover defined on all previous iterations. Intuitively, it is also not very clear why this can work effectively. Resetting $C$ at the beginning of the episode means the bonus construction has no memory of what states have already been visited before. Combined with that the algorithm performs policy optimization based on reward + bonus, consider the setting if we have sparse reward and short horizon: for example, if one wants to escape a maze in small time budget, the algorithm could first visit some dead end, but receives high bonuses due to resetting $C$, and update to visit the dead end with higher probability after the PG update, and visit the dead end again but still receives high bonus, and thus eventually stuck?\n\n2. The contribution of this work does not seem obvious, either. Using elliptical bonus in Deep RL or with neural network function approximation has already studied by some previous work. For example, PC-PG [1] (which this paper also mentions) actually uses PPO+elliptical bonus in their experiment. [2] even has a specific Deep RL version that uses deep RL + elliptical bonus and is evaluated in the deep RL benchmarks. If we also consider the representation learning part, [3] provides both theoretical and empirical results with neural network function approximation. In addition, [4] also justifies that a variant of [2] is actually performing feature learning in a noisy system. There seems no comparison against any of these works in this paper, given that the approaches are similar and the above baselines actually also show strong performance in practice.\n\n### references\n\n[1] Agarwal, Alekh, et al. \"Pc-pg: Policy cover directed exploration for provable policy gradient learning.\" Advances in neural information processing systems 33 (2020): 13399-13412.\n\n[2] Song, Yuda, and Wen Sun. \"Pc-mlp: Model-based reinforcement learning with policy cover guided exploration.\" International Conference on Machine Learning. PMLR, 2021.\n\n[3] Xu, Pan, et al. \"Neural contextual bandits with deep representation and shallow exploration.\" arXiv preprint arXiv:2012.01780 (2020).\n\n[4] Ren, Tongzheng, et al. \"A free lunch from the noise: Provable and practical exploration for representation learning.\" arXiv preprint arXiv:2111.11485 (2021). 1. 
In figure 1, why are the baselines not performing well even with their original constructions (with counts)?\n\n2. The motivation against using counting based method seems not very fair? \"if each state is unique\" (line 102) almost implies that the state space is continuous, then one obviously should never use naive count-based bonus, since count-based bonus only works for tabular MDP. \n\n3. In the feature learning part, it is known that the inverse dynamics model is valid only if it is conditioned on some conditional distributions of actions (for example, policies). In the proposed algorithm, is the conditioned policy being a mixture of all previous policies? Would it providing conflicting information, for example, at some state policy a goes left and policy b goes right? Since the algorithm aims to perform exploration.\n\n4. In result shown in Fig.6, is there any hypothesis why RND completely fails on all tasks?\n\n5. The ablation shown in section 5.1.3 is not using very strong baselines. For example, in [1,2,4], those method are using RFF kernels and random networks like RND as feature mappings, which show convincing empirical performance. \n\n6. In Fig.8, is there any possible explanation for why E3B is worse than the baselines in some task? 1. As mentioned in the previous section, resetting the covariance matrix seems a limitation of the proposed work. \n\n2. The results shown in Section 5.2 shows that the proposed algorithm achieves very limited improvement or even no improvement than previous algorithms on regular MDPs with large state space. ",
" The paper gives evidence of the importance of the pseudo count quantity used in existing exploration algorithms in RL, and proposes a new algorithm to extend the pseudo count idea to continuous state spaces by also proposing a method to learn a feature encoder. They give this evidence through various empirical experiments.\n\nThanks to the authors for putting in the effort in doing this work! Strengths:\n- I appreciate works that take a hard look at existing methods in the manner that is done in this paper and try to understand the fundamental flaws.\n- I like table 1 of distilling the exploration methods to their essence, and also I like figure 3.\n- The time-counter example of how states can appear unique is a nice one for intuition.\n- The paper is generally well-written and easy to follow.\n\nWeaknesses:\n- I view the paper as trying to solve two independent problems, but the paper makes it seem they are intrinsically tied together. The two problems are: 1) learning a feature encoder and 2) exploration. While the paper does include an ablation study showing how important both these parts are, it would be nice to get a better understanding of these two independent components. For example, how does using the learned encoder improve other existing count-based methods, or how does using off-the-shelf feature encoders affect the performance of the elliptical exploration algorithm?\n Questions/Suggestions:\n- How are differences between contexts defined? I see that there is a distribution over contexts but how extreme can differences between two contexts be? It doesn't appear to be the case here, but generally does the reward function change between contexts too?\nHow many samples are actually needed for fitting in the ellipse? \n- The paper also notes that “any feature learning method” could be used, but it proposes to use the inverse dynamics model. I am curious how off-the-shelf feature extractors perform with the elliptical exploration algorithm? It seems like this problem is an independent problem itself, and it would make sense to leverage existing works instead of re-inventing something new.\n- How sensitive are results to the regularization term in Equation (1)? In my experience, I have found this term to be sensitive.\n- I know the goal isn't explicitly lifelong learning, but I am curious what the authors think about getting E3B (non-episodic) from Figure 7 to work in that setting? In a lifelong setting where the agent is constantly interacting with no clear start and end to different contexts, I wonder how the proposed method would work then and how it compares to existing methods?\n- So we can see a clear performance improvement in Figure 6. But I am curious why ICM performs similarly to E3B on the high-dimensional Vizdoom domain, especially given that the motivation was current methods don’t fare well in high dimensional/continuous state spaces?\n- How will the count-based methods in Table 1 improve or change if you use the proposed feature encoder (inverse dynamics model) as input into these existing exploration methods instead of using the proposed elliptical count-based algorithm?\n Yes, they have.",
" This work highlights an important weakness for existing exploration methods relying on intrinsic motivation in CMDPS, namely their over reliance on a count-based episodic term. Since this count-based term is computed on hand-engineered features, or on the entire observation, such methods underperform in noisy or more realistic environments. The authors propose to replace this term with an elliptical episodic bonus, computed on learned features extracted by inverse dynamic modeling. This simple modification significantly improves the performance of intrinsically motivated agents in complex environment which differ across episodes. This is shown through experiments on the MiniHack suite, and potential for scaling to image-based tasks is presented on VizDoom. The authors also report some interesting nuances in environments, and clarify the importance of heuristic choices in existing methods. **Strengths**\n* The claims made through the paper are modest, but well argued and sufficiently supported by experimental results.\n* The paper is well written and easy to follow. The motivating weakness of existing methods is outlined well, and the idea of elliptical bonuses is clearly presented, and motivated from different point of views.\n* The experimental section is convincing: the metrics reported follow a statistically principled evaluation approach, baselines are satisfactory and their behavior is well explained.\n\n**Weaknesses**\n* While the motivation and the intuition of the method are clear, the main body of the paper does not explicitly describe the algorithm itself, which is relegated to the Appendix. As a result, this part of the paper is lacking in clarity.\n* The method is not particularly original, as it can be seen as an incremental generalization of count-based exploration bonuses to continuous settings. **Main comments**\n* The method's clarity can be improved. While the computation of the elliptical bonus is clear, its integration in the reward signal is not. This should be clearly discussed in the main body of the paper. Furthermore, to the best of my undestanding, E3B's reward only includes an episodic term, while RIDE, AGAC and NovelD also include a global term. While the lack of robustness in the episodic term is understandable, it is not clear why E3B also removes the global term.\n* I would suggest author to ablate the method by replacing the elliptical terms with a hash-based count term [1], which can be computed over the representations learned through inverse modeling.\n* The choice of MiniHack tasks is not well motivated. I am not fully informed on common evaluation protocols on MiniHack, but to my understanding the authors performed a selection of tasks. The reasoning behind this selection should be motivated, particularly in relation to prior works.\n* Figure 6 shows how RIDE does not achieve any reward signal in any of the tasks considered. Can the authors provide an intuitive explaination for this result?\n\n**Minor comments**\n* line 50: Prior works on elliptical bonuses should be cited before the Related Works section on the last page. I would suggest to add them here, in order to clarify that this idea is not novel.\n* lines 83-85: I would recommend adding references to NovelD and AGAC to better assist the reader.\n* Figure 1: What is the behavior of each agent when disabling the count-based episodic bonus? 
Is their explorative behavior less extended, or do they fail to move at all?\n* line 123: This experiment is only reported for NovelD, but the reasoning for this choice is only provided on line 243. I would recommend to move this explanation here.\n* line 176: Why is $n$ not fixed across all experiments? How is its value selected?\n\n**References**\n\n[1] Haoran et al. \"#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning.\", 2017 The authors do not overstate their contribution, and the limitations of their work are properly presented."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"SZV6S-r5yvL",
"KLVreeyFmPx",
"1M1wkPib_EH",
"1M1wkPib_EH",
"tvMCHyAsTr",
"V-juRSWtWA",
"rA3uKN4IFn8",
"nips_2022_Xg-yZos9qJQ",
"Cmjqk5R2P6U",
"9Hzh34aJrNh",
"9Hzh34aJrNh",
"iy3rlnYOSV",
"iy3rlnYOSV",
"nips_2022_Xg-yZos9qJQ",
"nips_2022_Xg-yZos9qJQ",
"nips_2022_Xg-yZos9qJQ"
] |
nips_2022_h4kN_apci_R | Probabilistic Missing Value Imputation for Mixed Categorical and Ordered Data | Many real-world datasets contain missing entries and mixed data types including categorical and ordered (e.g. continuous and ordinal) variables. Imputing the missing entries is necessary, since many data analysis pipelines require complete data, but challenging especially for mixed data. This paper proposes a probabilistic imputation method using an extended Gaussian copula model that supports both single and multiple imputation. The method models mixed categorical and ordered data using a latent Gaussian distribution. The unordered characteristics of categorical variables is explicitly modeled using the argmax operator. The method makes no assumptions on the data marginals nor does it require tuning any hyperparameters. Experimental results on synthetic and real datasets show that imputation with the extended Gaussian copula outperforms the current state-of-the-art for both categorical and ordered variables in mixed data. | Accept | The authors propose a single and multiple missing value imputation method for mixed data under the MCAR (missing completely at random) assumption. The method is based on using a latent Gaussian distribution in the form of an ordinary Gaussian copula model for ordered data (ordinal and continuous) and an extended Gaussian copula model proposed by the authors for categorical variables. The author showcase impressive results for their approach, which outperforms standard imputation methods in both synthetic and real-world experiments.
The reviewers agree that this is overall solid work that makes important technical advances and illustrates the usefulness of the approach. The authors were able to address the main concerns raised by the reviewers in the discussion. Some questions remain, but they do not appear to be of a nature that would call the results or the overall contribution into question. Moreover, reviewer JaBb quite strongly supports the acceptance of the manuscript.
Taken together, I think this manuscript is a very good submission and I support its acceptance.
| train | [
"ub25V7QjwgK",
"QtBcIZC-bu",
"NEii1FtBrQc",
"SufR8LO7F8",
"ndecbGw-Fk-",
"y-sryl9C3HH",
"jpEiFlIZJtw",
"Gm_Cubo-GLu",
"AhGUlieRiir",
"QEz_M8UcDAx"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for raising your score! We actually selected the value of M and beta on randomly generated category probabilities before we conducted any experiment reported in this paper. Thus the marginal estimation performance on our selected M and beta are already test data performance! \n\nYou could also refer to our response to reviewer JaBb for more discussion on selected optimization parameter, which I pasted below for your convenience:\n\nIn short, larger M leads to better quality of the Monte Carlo (MC) approximation in Eq (5) and larger beta leads to better argmax probability approximation. Theoretically speaking, very large K requires very large M for accurate MC approximation, and very small category probability requires very large beta for accurate argmax probability approximation. However, in practice, it is rare to have K larger than 50. Also, if there exists a very small category probability (<1e-4), i.e. a very rare category, the limited samples may not be sufficient to learn relationship regarding the rare category. We found our algorithm with default values still has satisfying accuracy in our synthetic experiment variants with K=50 and tiny category probability (as low as 1e-4). Thus our provided default values of M and beta should suffice for most realistic cases.\n\nIf a categorical variable with more than 50 categories or very small category probability (<1e-4) does exist, one may want to preprocess it by merging similar categories and dropping rare categories, but one could also increase M and beta accordingly. The fact that larger M and beta are always better makes its tuning very easy if ever needed.",
" I would like to thank the authors for clarification and for performing an extra experiment. I rather increase my score for this paper. I still believe more experimental and theoretical validations for challenging MNAR scenarios are needed to make this work stronger. Also, I have still some concerns about how $M$ and $\\beta$ are selected. The authors mentioned that they took the largest possible value but how this largest value is decided? (on training data on the test data). How the user of your method should decide these values on different datasets?",
" Thanks for your question. We add more discussion here to address the concern regarding optimization hyperparameter and will add it to our revised paper.\n\nIn short, larger M leads to better quality of the Monte Carlo (MC) approximation in Eq (5) and larger beta leads to better argmax probability approximation. Theoretically speaking, very large K requires very large M for accurate MC approximation, and very small category probability requires very large beta for accurate argmax probability approximation. However, in practice, it is rare to have K larger than 50. Also, if there exists a very small category probability (<1e-4), i.e. a very rare category, the limited samples may not be sufficient to learn relationship regarding the rare category. **We found our algorithm with default values still has satisfying accuracy in our synthetic experiment variants with K=50 and tiny category probability (as low as 1e-4). Thus our provided default values of M and beta should suffice for most realistic cases.** \n\nIf a categorical variable with more than 50 categories or very small category probability (<1e-4) does exist, one may want to preprocess it by merging similar categories and dropping rare categories, but one could also increase M and beta accordingly. The fact that larger M and beta are always better makes its tuning very easy if ever needed. ",
" Thank you for clarifying the point regarding the covariance of **z** in Equation (1). \n\nI think reviewer **5kUW** makes a good point regarding the claim that no hyperparameter tuning is necessary. Are you claiming that the optimization hyperparameters never need to be changed, i.e., $M = 5000, \\beta=1000$ will suffice whatever the size of the problem (K) is?\n\nI found one more typo in line 251: \"hyperparamters\".\n\nApart from that, I remain positive about the paper, having read the other reviews and the rebuttal.",
" Thanks for your detailed comments, which we address below. For your comments listed in the Weakness:\n1. The previous work [1] is the motivation and building block of our paper. However, **[1] can only be used on ordered data, and cannot be used for any categorical with more than 2 categories**. Their software cannot be used to impute categorical data even when one-hot encoding is applied. That is because their imputation cannot enforce the constraint that among a set of 0/1 binary variables generated by one-hot encoding a categorical variable, only one can be 1. That is why [1] was not implemented in this paper. There is no obvious way to solve our problem using their provided codes. **Our paper solves the problem of categorical imputation**: using the proposed Gaussian-max distribution for categorical variables, we extend [1] so that it can be applied to datasets with an arbitrary number of categorical variables. Additionally, we take it seriously to acknowledge existing work and we referenced [1] 11 times throughout this paper (2 in introduction and 9 in methodology), to distinguish our original contribution from existing work. We will add the above discussion into the main paper.\n\n2. The imputation has two parts: first, it estimates a model from partial observations and then imputes the missing entries based on the estimated model. The second step does not make any assumption on the missing mechanism. For our proposed EGC model, the first step has two parts: marginal estimation and correlation estimation. The marginal estimation accurately matches the empirical marginal distribution, so it requires the missing uniformly at random (MCAR) mechanism to be accurate. With an accurately estimated marginal, the correlation estimation only requires missing at random (MAR) mechanism to be consistent, according to [2, Chapter 6.2]. Quantifying the influence of a violated missing mechanism on the resulting imputation is challenging, for almost all imputation methods. We will add this discussion into our main paper.\n\n3. The cubic time complexity is not a problem for skinny datasets with a few hundred features, as shown in [1, Section 7.3], and skinny datasets are common in practice. Thus we believe our paper makes a useful contribution to the current literature. Scaling to wider datasets is an interesting and important challenge, but it is beyond the scope of this paper. Nevertheless, we described one potential way for our proposed EGC to scale to wider datasets in the supplement (Sec 1.5). \n\n4. Multiple imputation (MI) is often used in specific case studies ([3] e.g.) where no ground truth is present. There is no generally accepted metric to compare MI methods. Nevertheless, we designed a synthetic experiment scenario, where the true distribution of missing entries can be accurately approximated, to showcase that **our MI provides more accurate distribution estimation of the missing entries than MICE using much less time**, and MICE is the only other imputation method implemented in this paper that supports MI. The experiment and result is put in the appendix A of the revised paper. \n\n5. We do not treat M and beta as hyperparameters because (1) larger values of M and beta always bring more accurate approximations (Eq 5) and thus also more accurate parameter estimation; (2) we use fixed values of M and beta throughout all experiments in this paper. 
The values we use are large enough: further increases do not yield significant performance improvements, but merely increases computation time.\n\nFor your Q1 and Q2: mu1=0 is a sufficient but not necessary condition for the Gaussian-max distribution to model an arbitrary 1-dimensional categorical distribution. This is the result of Theorem 1, which requires no restrictive assumptions. Assumptions are needed only to ensure the model can match a *multivariate* distribution, as in Definition 1 and 2. In other words, the assumption is only on the multivariate dependence, but not on the marginal distribution of an individual categorical variable. That is, **the EGC model fits arbitrary (ordered and categorical) marginal distributions exactly.** We think that’s awesome! As to whether / when the multivariate assumptions hold: that’s a super interesting question, and definitely worth studying, but beyond the scope of the present paper.\n\nFor your Q3: \nOur method does not depend explicitly on the rank of the dataset, unlike methods like softimpute and imputeFAMD. As a result, we expect that the method will perform well (at least relative to these others) on data with high rank. \n\n[1] Zhao, Yuxuan, and Madeleine Udell. \"Missing value imputation for mixed data via gaussian copula.\" KDD 2020.\n\n[2] Roderick JA Little and Donald B Rubin. 2002. Statistical analysis with missing data. \n\n[3] Hollenbach, Florian M., et al. \"Multiple imputation using Gaussian copulas.\" Sociological Methods & Research 2021",
" Thank you for your positive feedback and pointing out the typos, which we will correct. For your question regarding the scalability, we want to first clarify that each iteration of the model fitting algorithm **scales linearly in terms of the number of samples (n)**, as discussed in lines 220-230. Thus, a larger n is not a problem for using our proposed EGC model. We also added a large data experiment to address your concern empirically. \n\nThe added experiment is on the synthetic dataset with 15 variables (5 categorical) in Sec 3.1, but with larger sample sizes 10000 and 20000 (originally 2000 in Sec 3.1). We implement our EGC and missForest here only, since MICE is too slow and low rank methods naturally scale well on large datasets. Table 1 reports the results. The runtime of our EGC increases much slower than missForest and it achieves better imputation performance. **The results here indicate that for large n datasets, missForest can be prohibitively expensive while our EGC still scales well.** \n\nTable 1: Added synthetic experiment: Mean (sd) of runtime in seconds and imputation error for each variable type (cat for categorical, cont for continuous and ord for ordinal), over 10 repetitions. See Figure 1 in the main paper for the error metric. \n\n\n| n = 2000 | Runtime | Cat Error|Cont Error|Ord Error|\n| ------------- |-------------|-------------|-------------|-------------|\n| EGC (our)| **33 (2)** |**0.64 (0.01)**|**1.81 (0.06)**|**0.50 (0.02)**|\n| missForest | 53 (11) |0.68 (0.01)|2.06 (0.07)|0.59 (0.02)|\n| **n = 10000** | \n| EGC (our)| **107 (4)** |**0.64 (0.01)**|**1.81 (0.04)**|**0.45 (0.01)**|\n| missForest | 1006 (70) |0.66 (0.01)|2.05 (0.04)|0.52 (0.02)|\n| **n = 20000** | \n| EGC (our)| **202 (9)** |**0.64 (0.01)**|**1.81 (0.03)**|**0.42 (0.01)**|\n| missForest | 3714 (267) |0.66 (0.01)|2.04 (0.04)|0.48 (0.01)|",
" Thank you for your positive feedback and for pointing out the typos, which we will correct. For your question, we want first to clarify that it is okay to use a less restricted covariance of z in (1), see [1]. However, that will introduce redundant parameters. One major contribution we make is the observation that an identity covariance for (1) suffices to model any univariate categorical distribution (Thm 1). Relaxing to partially ordered z is a fascinating point, and it may be used to generate variables of special types. We consider it outside the scope of this paper as partially ordered z is not naturally suited for modeling a categorical variable.\n\n[1] Christoffersen, Benjamin, et al. \"Asymptotically Exact and Fast Gaussian Copula Models for Imputation of Mixed Data Types.\" Asian Conference on Machine Learning. PMLR, 2021.\n",
" The authors propose a single and multiple missing value imputation method for mixed data under the MCAR (missing completely at random) assumption. The method is based on using a latent Gaussian distribution in the form of an ordinary Gaussian copula model for ordered data (ordinal and continuous) and an *extended* Gaussian copula model proposed by the authors for categorical variables. The author showcase impressive results for their approach, which outperforms standard imputation methods in both synthetic and real-world experiments. This work represents a novel and sound method for handling single and multiple imputation for mixed data. The extended Gaussian copula probabilistic model using the argmax transformation is, to the best of my knowledge, novel. The results showcased are impressive and suggest that the work may have significant impact. Finally, the paper is almost flawlessly written and clearly structured.\n\nMinor comments:\n- p1 (page 1), l4 (line 4): Insert 'this is' before \"challenging\".\n- p1, l19: \"we\" needs to be capitalized.\n- p5, l189: \"expection\" $\\to$ 'expectation' Would it make sense to relax the constraint for the covariance of **z** in (1), for instance if there are variables that are partially ordered? There are no potential negative societal impacts for this work that I can imagine.",
" The authors through this manuscript propose a probabilistic model to impute missing values of mixed data including continuous, categorical and ordinal variables that supports both single and multiple imputation. The proposed model is based on extended Gaussian copula, is free of hyperparameters, and makes no assumptions on the marginal distribution of the data types. Authors run a series of experiments to compare the accuracy of the imputations of the proposed model with several competitors. Authors did a good job in articulating the importance of their study and it's contributions to the literature including the shortcomings of the existing imputation methods and the proposed algorithm methodology. The proposed methodology is evaluated through a series of experiments to compare the accuracy with other competing methods. I agree with the authors that several existing methods such as MICE, MissForest etc. are good but converge slowly especially for large datasets, however, I strongly believe as part of their experiments authors should have evaluated real world applicability by testing the performance of the proposed EGC model on a large dataset and test it's scalability. 1) The dataset used in Synthetic experiment has 2000 samples, and in Real data experiments a maximum of just 4177 samples. Did the authors test the convergence of the EGC model on a much larger dataset? If yes, what were the drawbacks you noticed with the performance?\n2) Also, noticed a minor typographical error at Line 49, \"approach can explicitly modeling the categorical distribution\" Other than the scalability of the proposed EGC model, authors did address the limitations of the study.",
" This manuscript proposes an extension of the Gaussian copula model for imputing missing values on mixed non-ordinal (e.g., categorical) and ordered (e.g., continuous) data. The authors show the superiority of the proposed method on 1 synthetic and 6 real datasets in comparison to several competing approaches. # Strengths\n1) The paper addresses an important and well-defined problem in the machine learning context.\n2) The text is clear and the technical contributions are well presented.\n3) The proposed method shows a substantial improvement over the competing methods.\n\n# Weaknesses\n1) The main contribution of this manuscript overlaps substantially with a previously published work \"Missing Value Imputation for Mixed Data via Gaussian Copula\" (https://dl.acm.org/doi/abs/10.1145/3394486.3403106, which is not even mentioned in the related works), which affects negatively the significance of this work. The authors must i) discuss the added value of their new method and ii) compare its performance with the existing one. \n2) Even though some results for the MAR and MNAR scenarios are presented in the supplementary material since all the contributions in the manuscript are made based on the MCAR assumption, the author must discuss thoroughly how violating the MCAR assumption may affect the assumptions of the proposed model.\n3) The computational complexity of the method is cubic with respect to the dimension of the latent which is restricting in practice. The authors propose to use a low-rank approximation of $\\Sigma$, but the effect of such an approximation is not evaluated in the experimental results. \n4) While promised as a method for multiple imputation (lines 63-65), the method is not evaluated for multiple imputation in the experiments (stating that the correct distribution is mostly unknown). Maybe this could be evaluated on synthetic data at least.\n5) While the proposed method has two hyperparameters (referred to as optimization hyperparameters $M$ and $\\beta$), the authors claim that their method has no hyperparameters. Further, it is not very clear how the values for these hyperparameters are specified. 1) in lines 93-94, why the assumption of $\\mu_1=0$ is necessary for the model to work?\n2) in lines 102-103, why the Gaussian-Max distribution assumption is necessary for $x_j$, and how this assumption affects the generality of the proposed model? Does it always hold? What happens if it does not hold? How can we check in practice whether this assumption holds?\n3) how does the proposed method perform when the input data has a very large rank?\n The authors must include a section to discuss the limitations of the proposed method."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"QtBcIZC-bu",
"ndecbGw-Fk-",
"SufR8LO7F8",
"jpEiFlIZJtw",
"QEz_M8UcDAx",
"AhGUlieRiir",
"Gm_Cubo-GLu",
"nips_2022_h4kN_apci_R",
"nips_2022_h4kN_apci_R",
"nips_2022_h4kN_apci_R"
] |
nips_2022__L7f0ySKMWY | Near-Optimal Multi-Agent Learning for Safe Coverage Control | In multi-agent coverage control problems, agents navigate their environment to reach locations that maximize the coverage of some density. In practice, the density is rarely known $\textit{a priori}$, further complicating the original NP-hard problem. Moreover, in many applications, agents cannot visit arbitrary locations due to $\textit{a priori}$ unknown safety constraints. In this paper, we aim to efficiently learn the density to approximately solve the coverage problem while preserving the agents' safety. We first propose a conditionally linear submodular coverage function that facilitates theoretical analysis. Utilizing this structure, we develop MacOpt, a novel algorithm that efficiently trades off the exploration-exploitation dilemma due to partial observability, and show that it achieves sublinear regret. Next, we extend results on single-agent safe exploration to our multi-agent setting and propose SafeMac for safe coverage and exploration. We analyze SafeMac and give first of its kind results: near optimal coverage in finite time while provably guaranteeing safety. We extensively evaluate our algorithms on synthetic and real problems, including a bio-diversity monitoring task under safety constraints, where SafeMac outperforms competing methods. | Accept | This paper presents a novel method for multi-agent coverage control over an unknown density and safety constraints. There is some concern about the level of significance of the approach but it is interesting and sound. There were also concerns about scalability and the use of GPs for density modeling but the authors have sufficiently addressed these in the response and updated paper. The paper would be strengthened by highlighting the contributions and more extensive experiments to show the benefits of the approach in different settings. | train | [
"LEef8Omnr4f",
"D-uIlgUL9jh",
"fHWmzhqwVZ",
"-dxL5SZNZu-",
"SK2RmD-d_l",
"GK_WK4g_bI",
"AETSVcVbsNK",
"-S01qHdgp_t",
"Elm2AwFgU_U",
"ppd4VfPRoPI",
"N8mHxBnyqgz",
"Gtj1-QZAA5",
"bWFSOUxjnLe",
"ifyqmMr8mFh",
"pzo92J0BzW"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > As in the original submission there appears to be a region where the proposed approach is most useful (for low to medium number of samples, PassiveMac is far better), for high number of samples, Two-Stage is about equal or not much worse. \n\nFor the reviewer comment above, we would like to add that,\n\nPassiveMac is a heuristic that does not have theoretical guarantees. It converges to a local optimum, which can be far from optimum as well depending on the environment (in particular based on how the unknown constraint function looks like), e.g. for the obstacle and the GP environment, in fig 5a,b (in the appendix, Page 42) PassiveMac curves are different. \n\nBoth Two-stage and SafeMac have optimality and safety guarantees. However, the Two-stage algorithm requires more samples, as shown in fig 3b. ",
" Thank you for your response. The main goal of the empirical study is \n\n>(\\*) “to show that SafeMac finds a comparable solution to two-stage in a much sample-efficient way whereas PassiveMac quickly gets stuck in a local optimum.” \n\nWith the scaling experiment, we show that (\\*) also holds for a large number of agents and larger domains. This is clear from any plot in figure 6.\n\nThanks for the feedback to keep the same y-axis range, it may not provide new information about (\\*). However, it can be interesting to see the coverage trend with scaling (larger K and domain size). We will include it in the final version of the appendix. ",
" Thanks for your reply. Apologies that we didn’t answer individual typos/clarification, we wanted to keep the response short for the reviewer time.\n\n1. The algorithms shall start with t=1 instead of t=0. All the notations are as per that.\n2. Yes, you are right, the sample complexity bound in corollary 1 should be >=. (similar to theorem 2, line 232)\n\nWe will address all the typos and the required clarifications in the final version.",
" Figure 6 explores performance for a wider range of conditions (scalability). As in the original submission there appears to be a region where the proposed approach is most useful (for low to medium number of samples, PassiveMac is far better, for high number of samples, Two-Stage is about equal or not much worse. It may be sensible to use a same y-axis range per row. At the moment, the range is different depending on both row and column which makes it hard to compare the results.",
" I can see some revisions have been made to the documents (e.g. new Figure 6 in the Appendix), however, it seems the following of my questions were not addressed.\n\n1) I am not sure if Algorithms 1 and 4 are fully correct. In Lines 3 and 5, respectively, initially when t=0, they probe w_0, which is not initialised. Should this be w_1?\n\n2) Corollary 1 seems to suggest that choosing t_p^* = 0 always works. I think it should be >= not <=.\n",
" Dear Reviewers,\n\nWe thank all of you for your time and valuable feedback. We believe that we have addressed all your concerns. If you have further doubts, we would be happy to discuss them. Otherwise, please consider raising your scores.\n\nThanks,\n",
" We thank all the reviewers for their time and valuable feedback. Here, we address common questions raised by most reviewers.\n\n**Scaling:** Our algorithm evaluates a greedy solution K times (one for each agent) at each iteration. Therefore, it is linear in the number of agents. Moreover, the greedy algorithm is linear in the number of cells (domain size). To demonstrate scalability in practice, we added experiments with 3, 6, 10 and 15 agents each with domain sizes 30x30, 40x40, 50x50 and 60x60 in appendix G.1 (Page 44, last page in the revised supplementary). \n\n**Path Planning:** \n- **Agent moving within the safe set:** We do *NOT* assume that agents can jump within the safe set. Instead, given two pessimistically safe nodes in the graph, our analysis guarantees that there is a path within the pessimistic safe and ergodic set connecting them. Therefore, we can always obtain a safe path using Dijkstra or A*. In reality, the robots would use a low-level controller (e.g., PID or MPC) to track this high-level path. \n- **Safety @ EbSJ:** Since this path is contained in the safe set, the agents follow a safe trajectory, if it tracks it sufficiently well (which can be achieved with a well-designed low-level controller).\n- **Dynamics constraints @ 9Vst:** The dynamics are modelled using a graph where nodes represent states and edges represent transitions. This representation is general and can embed dynamic constraints (e.g. a 4-wheeled vehicle could go forward, forward left, forward right but not just left or right). \n\nApologies if this was not clear in the paper, we will elaborate more on this in the final version of the paper.",
" We would like to thank reviewer rxa3 for the valuable feedback.\n\nPlease see our answers below:\n\n**Summary of the paper:** We would like to clarify that the first algorithm (MacOpt) is also a multi-agent one.\n\n**Scaling:** Please see the common response at the top.\n\n**Related literature:** The coverage control literature using RL considers settings and applications that differ considerably from ours. The major differences are, \n- They consider an episodic setting whereas we consider a non-episodic one where agents learn a near-optimal solution in a single trajectory.\n- [2,3,4,6] do not consider constraints. [5] uses heuristics and tuning to satisfy constraints but it incurs violations during training. In contrast, SafeMac guarantees safety at all times. \n- [1,2,4] solve a different problem (using Lloyd’s algorithm, more in our related works). Moreover, almost all of them are developed for working with a different sensor observation model (e.g, patch observation of nearby cells) [1,3,4,5,6].\n\nThe differences above make a suitable and fair comparison rather difficult, if not impossible. However, we will include several of the works mentioned here in the related works.",
" We thank the reviewer EbSJ for the detailed review and the constructive feedback. We are glad that the reviewer acknowledges the novelty of the work.\n\nThank you very much for letting us know about the typos and clarifications, we have included all of them in the paper. Please see our responses to the major concerns below:\n\n**Strength and weakness**\n\n**Near-Optimal in finite time:** In combinatorial optimization, constant factor approximation guarantees and those matching lower bounds (as is the case in our paper) in particular are referred to as near-optimal. High-probability guarantees are standard in the literature when working with GPs (GP-UCB[17], Kernelized Bandit [16]) due to the unbounded support of Gaussians. \n\n**Safe path planning and scaling of the algorithms:** Please see the common response at the top.\n\n**Objective of the problem:** Both fixed-budget (the problem suggested by the reviewer) and fixed-confidence (the problem we address in the paper) are widely studied settings in the bandit literature and both have relevant applications (see “Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence” by Gabillon et al.2012). We believe that a fixed-budget algorithm for this problem could be an interesting research direction and we will clarify this in the paper. \n\n**Why PassiveMac perform better initially?:** It is intuitive that PassiveMac performs better than SafeMac initially, in many cases. PassiveMac focuses on high-objective values and converges quickly to a local optimum as it does not actively explore the constraints. In contrast, SafeMac focuses on samples that enlarge the safe set but may not have a high value initially. However, this exploration of the constraints, independent of the objective value pays off in the long run as SafeMac explores the safe set to discover a better solution.\n\n**Number of samples: In fig 3c,** The X-axis represents the sum of both the density and the constraint samples. Nearly 800 corresponds to the total samples in the longest episode of the two-stage algorithm, whereas SafeMac converges early (<600 samples, Fig 3b show the relative samples). Moreover, the number of samples required is determined by the shape of the constraint function, the required accuracy, and the noise.\n\n**Questions:**\n- Yes, under the assumption of conditionally linear, monotone and submodular function, the problem is still NP-Hard.\n- The brute force approach of enumerating all the solutions is intractable for all but trivial problems. We meant under reasonable complexity-theoretic assumptions, there is no polynomial time algorithm that can give the solution with a known density.\n\n[11] is published in NeurIPS 2019; apologies that the .bib of the paper didn’t include it.",
" We are thankful to the reviewer p24T for their time and constructive feedback. We are glad that the reviewer recognises our formulation, performance guarantees, and algorithms to be novel and written in a solid style.\n\n**NP-hard problem:** We need to approximately solve the NP-hard problem of maximizing a monotone submodular function under cardinality constraints as a sub-routine at each iteration. To this end, we use the greedy algorithm that is guaranteed to recover a solution within a factor 1-1/e from the optimal one [12], which is provably the best approximation we can obtain in polynomial time [46]. Therefore, we do address this problem directly by leveraging existing results. \n\n**Function regularity:** Our method relies on the idea that safety in one location is predictive of the safety of nearby ones. Therefore, SafeMac would struggle in environments where agents may go abruptly from a very safe region to a very unsafe one. E.g., if we define safety in terms of the steepness of the terrain explored by a rover (see [11, 22, 58]) the agents may go from 0 (safe) to very high steepness (unsafe) if there is a cliff. In such cases, the agent may wrongly predict the safety of nearby locations.\n\nThank you very much for pointing us to caption clarity and grammatical errors, we have included all of them in the paper.",
" We thank reviewer 9Vst for the time and the insightful feedback. \n\n**Known kernel:** Kernel functions express our prior belief about the unknown density and constraint. If domain knowledge is available, it can be readily incorporated in them. If this is not the case, one can use conservative kernels that capture large classes of functions (e.g. Matern with low \\nu and small lengthscales). There is a trade-off: the more conservative the kernel the higher the likelihood that the function we are modelling is in the corresponding RKHS but the less efficient the exploration. In the literature [11,22,23,57,58], it is common to assume that domain knowledge is used to select a kernel that strikes the right trade-off. Thanks for bringing this up, we will elaborate more on this in the final version of the paper.\n\n**Dynamics constraints and Path planning:** Please see the common response at the top.\n\n**Scaling:** Please see the common response at the top.\n\n**Modelling using GP:** If the prior is wrong (refer to known kernel above), modelling of constraints and density might not be error-free. However, sample efficiency, uncertainty quantification, and analysis still make GP a very suitable choice.\n",
" Two algorithms to deal with a map covering problem where there are unknown obstacles is proposed. The first algorithm is a single agent algorithm in which it needs to deal with the exploration-exploitation dillema because of the partial observability of the environment. \nThe second algorithm, extends the first algorithm by considering multi-agent settings in which the exploration as well as the safe coverage is a chanllenge. It is shown that the second algorithm obtain near optimal coveragw while guarantees the safety. Same as below Q0- Would your algorithm works OK with larger number of agents? For example [3] and [4] run problems with 20 and 10 agents, respectively. Is it possible for you to run a problem with a same or bigger size? \nSimilarly, [5] has a 4 agents problem and the analyzed environment is also pretty similar to yours. That also could be a good benchmark. \n\nQ1- I have seen several RL and MARL algorithm to deal with this problem. Even though you propose a sublinear cumulative regret, but still you do not know if the RL based algorithm can achieve the same performance or not on a same problem. I think an interseted reader would like to see the comparisons of these two classes of algorithms. \n\n[1] A reinforcement learning‐based approach for modeling and coverage of an unknown field using a team of autonomous ground vehicles\n[2] Adepegba, Adekunle A., Suruz Miah, and Davide Spinello. \"Multi-agent area coverage control using reinforcement learning.\" The Twenty-Ninth International Flairs Conference. 2016.\n[3] Ye, Zhenhui, et al. \"Multi-UAV Navigation for Partially Observable Communication Coverage by Graph Reinforcement Learning.\" IEEE Transactions on Mobile Computing (2022).\n[4] Gosrich, Walker, et al. \"Coverage Control in Multi-Robot Systems via Graph Neural Networks.\" 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022.\n[5] Battocletti, Gianpietro, et al. \"RL-based Path Planning for Autonomous Aerial Vehicles in Unknown Environments.\" AIAA AVIATION 2021 FORUM. 2021. \n[6] Din, Ahmad, et al. \"A deep reinforcement learning-based multi-agent area coverage control for smart agriculture.\" Computers and Electrical Engineering 101 (2022): 108089.\n[7] Faryadi, Saba, and Javad Mohammadpour Velni. \"A reinforcement learning‐based approach for modeling and coverage of an unknown field using a team of autonomous ground vehicles.\" International Journal of Intelligent Systems 36.2 (2021): 1069-1084.\n Same as above ",
" The paper proposes a multi-agent coverage algorithm for a 2D environment with a priori unknown unsafe regions and noise density estimates that is guaranteed to be within a constant factor away from the optimal solution under certain assumptions (e.g. well behaved density and safety functions). The strength of the paper is a novel formulation of the exploration vs exploitation problem that takes into account agent safety in a multi-agent setting as well as the performance guarantees obtained. Although the algorithms and analysis built on previous work (e.g. [11,16,23]), I thought the work to be sufficiently novel.\n\nThe performance guarantee (Theorem 2) states that by doing enough iterations (i.e. sampling as a function of epsilon and delta), one can guarantee that the solution quality gets arbitrarily close (i.e. epsilon) to within a constant factor of 1-1/e of the optimum [10] with probability 1 - delta. Hence, the claim in the abstract of \"near optimal coverage in finite time\" seems overstated. In fact, when running the algorithm for a finite time, with probability delta > 0 one may end up worse than within epsilon from the constant factor approximation. Moreover, even if running the algorithm indefinitely, one can end up 1/e which corresponds to about 37% away from the optimum, which is good, but may not be considered as \"near-optimal\" by everyone.\n\nThe main weakness of the paper is that:\n(i) path planning of individual agents is not considered (samples are taken from regions of sufficiently safe cells, hence, ignoring the time and risks that accumulate for an agent as it travels through this region);\n(ii) it is not clear to what extent the proposed methods scale as only K=3 agents are considered and the environment is relatively small considering the intended applications (30 x 30 cells) albeit it is not a toy problem. Given much prior work exists for K=1, I would have appreciated an evaluation of the methods (that specifically address the multi-agent scenario) in more complex settings.\n\nThe paper claims that constraints prevent \"agents from monitoring from unsafe locations\" and agents \"may be unable to safely reach a disconnected safe area\" starting from their initial locations. The problem statement assumes that agents can only monitor locations that they can (in principle) reach, as they have to be part of the subset of the largest \"safely reachable region\" (see equation 2). This prevents an agent from monitoring say the other side of a river if the river cannot be safely crossed, irrespective of their sensing range. Although reasonable (to not overly complicate the problem statement), this assumption should be explicitly acknowledged, as the optimality guarantees are not correct otherwise.\n\nThe problem statement is challenging to follow, given there are a number of inter-connected aspects. Up to equation (2) it is rather straightforward, however, given uncertainties are introduced regarding the density and safety it is less clear thereafter what is actually the objective. For example, if the objective was to obtain the best possible result over a fixed duration - which may be realistic objective in practice - than the methods would not necessarily deliver, as the solutions presented do not seem to be anytime algorithms. 
Instead, the objective seems to be provide a solutions however long it takes (see also minor suggestions below).\n\nA few assumptions may make it harder to implement the method in practice, including the need for a centralised optimisation and synchronous measurements as well as the functions needed to be continuous (if they are not, the advantage of the method may be reduced). Also, looking at Figure 3(c), it seems there is a certain window (in terms of number of samples) where SafeMac excels whereas if there are fewer samples the method is clearly outperformed by PassiveMac, and as the number of samples approach 800 (which is almost 900 = one sample per cell) I wonder whether other solutions exists that perform equally well or better.\n\nI am not sure if Algorithms 1 and 4 are fully correct. In Lines 3 and 5, respectively, initially when t=0, they probe w_0, which is not initialised. Should this be w_1?\n\nCorollary 1 seems to suggest that choosing t_p^* = 0 always works. I think it should be >= not <=.\n\nIn Section 5 (Analysis), you state \"Since control coverage consists in maximizing a monotone submodular function, we cannot compute the true optimum even for known densities\". Why is this the case (in general, that is, non-greedy approaches)? Could one not simply enumerate through all solutions?\n\nI could not find a definition of L_q (input of Algorithm 4).\n\nMinor comments:\n- When talking about approximations of the feasible sets, are the max and min operators over x?\n- You first refer to Algorithm 2, then to Line 4 in Algorithm 1, and then you write \"Now, we introduce MACOPT (Algorithm 1)\", which is not ideal.\n- Where is [11] published?\n- In Theorem 2, you start with \"Let delta be element of (0,1)\", but the same could be said for epsilon, as you chose to use 2 variables instead of one.\n- \"a expander\" Would the problem under the assumption of conditionally linear, monotone and submodular functions still be NP hard?\n\nI am not sure if Algorithms 1 and 4 are fully correct. In Lines 3 and 5, respectively, initially when t=0, they probe w_0, which is not initialised. Should this be w_1?\n\nCorollary 1 seems to suggest that choosing t_p^* = 0 always works. I think it should be >= not <=.\n\nIn Section 5 (Analysis), you state \"Since control coverage consists in maximizing a monotone submodular function, we cannot compute the true optimum even for known densities\". Why is this the case (in general, that is, non-greedy approaches)? Could one not simply enumerate through all solutions?\n\nHow would the method scale with the number of agents (K beyond 3)? Not applicable",
" This paper looks into multi-agent coverage control (MAC) problems, in which multiple agents coordinate and navigate the environment to maximize the coverage of some densities. It aims at learning the density to solve the problem while ensuring the safety of the agents. It formally defines the safety-constrained multi-agent coverage control problem. The MAC task is modeled as a conditionally linear coverage function that possesses monotonocity and submodularity. A single-agent safe exploration algorithm named GOOSE is introduced. Using GOOSE as the intuition, the authors propose MACOPT, an unconstrained multi-agent coverage control algorithm, and SAFEMAC, a safety-constrained multi-agent coverage control algorithm that is created by extending GOOSE to multi-agent cases and combining it with MACOPT. Then they prove the convergence of MACOPT and the optimality and safety properties of SAFEMAC. Finally, the paper discusses how MACOPT and SAFEMAC are superior compared to existing methods by doing experiments in synthetic and real-world applications. It shows that SAFEMAC obtains better solutions than algorithms that do not actively explore the feasible region and that it has higher sample efficiency than competing near-optimal safe algorithms. This paper introduces a single-agent safe exploration algorithm (GOOSE) as the background, extends it to a multi-agent version, and proposes two algorithms for unconstrained and safety-constrained multi-agent coverage control problems. It starts from a well-known method and then presents novel approaches inspired by it, which demonstrates good originality.\n\nThe submission is written in a very solid style. The problem statement and definitions are formal, and the pseudocodes for MACOPT and SAFEMAC are displayed in detail. The paper analyzes MACOPT ’s convergence and SAFEMAC ’s optimality and safety properties using theorems and mathematical proofs. The statements are accurately written and rigorously proved (the full derivation is included in the appendix). The structure of the main paper is clear and organized. The authors demonstrate that multi-agent coverage control tasks are a class of difficult problems, especially when safety needs to be guaranteed, and their new methods provably address the tasks more efficiently than previous works.\n\nOn the other hand, the description of the figures should be improved. In the caption of Figure 1, it says \"Agent 1 covers $D^1$ (green), 2 covers $D^{2-}$ (orange) and 3 covers $D^{3-}$ (yellow).\" However, I cannot see where $D^1$, $D^{2-}$ and $D^{3-}$ are labeled in Figure 1(a). It also says \"In b) ... in the optimistic set\", but this set is not marked and its color is not specified in Figure 1(b) either.\n\nAnother side note is about the grammar problem. For instance, in line 108, \"... a the positive definite kernel matrix ...\" should be \"... the positive definite kernel matrix ...\" and in line 265, \"... such 1D-Lidars.\" should be \"... such as 1D-Lidars.\"\n\n In the Introduction section, the authors mentioned that deploying coverage control solutions in the real world presents many challenges and no prior work addressed them jointly. One of the challenges is that multi-agent coverage control with known densities is an NP-hard problem. Has the paper tried to address this specific problem? \n\nIn the Conclusion section, the authors noted that although in many real-world applications the density and the constraints are as regular as assumed in the paper, in some they are not. 
Is there a specific example in which the algorithms do not apply well? The authors specified that the limitation of the paper is that the proposed algorithms choose informative targets without planning informative trajectories, which can be crucial in the research of robotics. Besides, in some real-world applications, the density and the constraints may not be the same as assumed in the paper. In these cases, the algorithms no longer have optimality and safety guarantees. It is not likely that this paper will cause any potential negative social impact.",
" The paper presents an algorithm for safe coverage and exploration with multiple robots. Particularly, the paper focuses on an active information gathering problem where multiple mobile robots choose how to move around to explore the environment and maximize its coverage. Coverage is modelled through a spatial Gaussian process density function. Exploration is necessary since the density is unknown a priori. Safety constraints are also considered (the robots cannot access all locations in the environment), which are also assumed unknown a priori. The constraints are also modeled via Gaussian Processes. The paper provides an algorithm that guarantees (i) near-optimal coverage and (ii) satisfaction of the safety constraints. The algorithm is evaluated on a synthetic and two real world applications: safe biodiversity monitoring and obstacle avoidance. Strengths\n(+) Proposed algorithm guarantees sublinear regret and safety\n(+) Algorithm’s effectiveness is illustrated in simulations\n\nWeaknesses\n(-) Density and constraints have a known structure, being modeled as Gaussian Processes given kernel functions. I would elaborate on how the chosen kernel functions are known a priori.\n(-) The robots are allowed to jump from any point in the safety set to any other.\n(-) The experiments consider robot teams of only 3 robots.\n 1. I would elaborate on how the chosen kernel functions are known a priori.\n2. I would elaborate on how dynamics constraints are considered. For example, can robots move arbitrarily within the safety sets?\n3. I would elaborate on how scalable the proposed method is. What is its computational complexity? Up to how many robots can it scale?\n 1. The paper discusses in the conclusion that the robots currently are allowed to jump from any point in the safety set to any other; the conclusion states that future work will address the problem of planning feasible trajectories.\n2. I would elaborate on the limitations of modeling the density and safety constraints as Gaussian Processes.\n3. I would elaborate on how scalable the proposed method is. \n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4,
5
] | [
"D-uIlgUL9jh",
"-dxL5SZNZu-",
"SK2RmD-d_l",
"AETSVcVbsNK",
"Elm2AwFgU_U",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY",
"Gtj1-QZAA5",
"bWFSOUxjnLe",
"ifyqmMr8mFh",
"pzo92J0BzW",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY",
"nips_2022__L7f0ySKMWY"
] |
nips_2022_AhbTKBlM7X | Learning State-Aware Visual Representations from Audible Interactions | We propose a self-supervised algorithm to learn representations from egocentric video data. Recently, significant efforts have been made to capture humans interacting with their own environments as they go about their daily activities. In result, several large egocentric datasets of interaction-rich multi-modal data have emerged. However, learning representations from videos can be challenging. First, given the uncurated nature of long-form continuous videos, learning effective representations require focusing on moments in time when interactions take place. Second, visual representations of daily activities should be sensitive to changes in the state of the environment. However, current successful multi-modal learning frameworks encourage representation invariance over time. To address these challenges, we leverage audio signals to identify moments of likely interactions which are conducive to better learning. We also propose a novel self-supervised objective that learns from audible state changes caused by interactions. We validate these contributions extensively on two large-scale egocentric datasets, EPIC-Kitchens-100 and the recently released Ego4D, and show improvements on several downstream tasks, including action recognition, long-term action anticipation, and object state change classification. | Accept | The paper presents self-supervised representation learning from egocentric video data. The reviewers unanimously support the paper. Although WY3G has not updated the rating, the reviewer commented that s/he is upgrading the rating to Weak Accept, making the paper get three unanimous Weak Accept ratings. All three reviewers find the idea of using audio to identify the state changes for learning audio-visual correlation interesting. The authors are encouraged to include added experimental results they provided during the discussion phase to the final version of the paper.
| train | [
"exX0UhsTzwB",
"ywrlXmcFs9h",
"QJ7JhZAfTRP",
"pw36liRg6Kz",
"1gpPLgnfR-E",
"5hqdfYMZOVe",
"p3UGij_9XJQ",
"WS5AoYHOJWx",
"IF0sazx3LWg",
"YIp7RRNzNuTW",
"fPRbj-_oF2r",
"CVgnH9x-MSv",
"yt1PbmmLAWx",
"hn18lMTENm",
"_cGP3J-mMxi",
"2povrMJwVGS",
"znwbZoJKoWr",
"faC_Uiks94G",
"O6KIDaZsuRb"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. \n\nI agree with the other two reviewers that the comparison with other state-of-the-art methods is not sufficient in the original submission and the additional comparisons provided should be added into the paper. I agree with Reviewer coBj that some examples should also be provided in the paper for understanding the method. \n\nI agree with Reviewer WY3G that the generalization ability of the method is not convincing, the authors' response has partly addressed my concern, but I think it is acceptable that this is left for future works. And I agree with Reviewer coBj that the idea of using audio to identify the state changes for learning audio-visual correlation is interesting. I incline to keep my acceptance rating and hope the source code can be released for future works. ",
" Thanks a lot for your efforts, sharing constructive feedback, and post-rebuttal response.",
" Dear authors,\n\nThank you for your detailed and informative reply. Most of my concerns are now resolved and I believe the paper should be accepted in the conference.\n\nBest wishes.",
" From the review, we believe that the main concern raised is the generalization ability of the proposed method (for example, how to deal with actions _without strong temporal order_ like stirring). We addressed this concern by conducting a thorough analysis of the impact of different types of interactions on the learned representations. We add three new analysis and corresponding insights, particularly: \n\n(A). **Mean average precision among three types of interactions:** This shows that our model improves over baselines, regardless of interaction type.\n\n(B). **Norm of the visual state change around an MoI**: Regardless of the interaction type, we show bigger changes in visual states occur at the detected MoIs than at uniformly random timestamps. This shows that even for interactions without strong temporal order, our model can focus on visual cues that do change over time.\n\n(C). **Association score between the audio and the visual state change:** This shows that AStC is able to associate (better than prior work) the audio with the visual state change representation, even for interactions without strong temporal order.\n\nWe believe we concretely address the main concerns shared in the review. Is our response (and planned changes to the manuscript) satisfactory? If not, please let us know how we can better address these concerns.\n\nThank you so much for sharing constructive feedback to help improve our manuscript!\n",
" From the review, we believe that there were four main concerns preventing the reviewer from providing a higher score. In our rebuttal, we directly address each of these concerns. Is our response (and planned changes to the manuscript) satisfactory? If not, please let us know how we can better address these concerns.\n\n**Concern #1:** AStC may not be necessary since AVC can already learn representations aware of changes over time if associated with different sounds (eg different representations for opening and closing a fridge)\n\n**Response:** To address this concern, we pointed out that, unlike AStC, AVC provides _no incentive_ for the model to learn the temporal change in the visual state. This information allows the model to learn whether a clip is from the beginning or the end of the action, and thus encourages the model to learn more state aware representations. The complementarity between AVC and AStC is also experimentally validated throughout our experiments.\n\n**Concern #2:** Learning from MoIs may not be appropriate, as audio changes can be caused by more than human-object interactions\n \n**Response:** To address this concern, we clarified that MoI should be thought of as _any form of interaction_ that leads to a perceptible visual state change (Lines 53-54, 135-137), since the AStC loss can learn from any such interactions. _MoIs are not restricted to just human-object interactions_, and can be associated with any form of interaction within the environment. We also experimentally showed the benefits of MoI detection for several downstream tasks on realistic datasets like EPIC-Kitchens and Ego4D.\n\n**Concern #3:** AStC may not be general enough, since it may not work with interactions that are irreversible (like cutting vegetables) or interactions that cause no visual changes (like stirring a pot)\n\n**Response:** To address this, we add a new analysis on the impact of our method on the representations for three types of interactions. We measured how our discriminative features (using mean average precision), show improved performance compared regardless of action type. Using metrics like the norm of visual state changes, we show that even for an interaction like stirring a pot, the model can find visual cues that do change over time, indicating that this potential failure mode of AStC is more rare than one might expect.\n\n**Concern #4:** Fine-tune AVID baseline on egocentric datasets for a fair comparison\n\n**Response:** We include this additional run in our response. The same trends hold and this baseline helps strengthen the results.\n\nWe thank the reviewer again for their time and effort towards improving the manuscript.\n\n",
" From the review, we believe that the main concerns were to:\n\n(1) consider baselines beyond AVID\n\n(2) evaluate the ability to detect moments of interaction (MoI)\n\nWe addressed (1) by providing new results on EPIC-Kitchens for the suggested XDC baseline, as well as, RepLAI with an XDC initialization. We will include additional results on the Ego4D dataset with more time to revise.\n\nWe addressed (2) by presenting a new analysis that measures the norm of the changes in visual representation. We show bigger changes in visual states occur at MoIs that we detect as compared to uniformly random timestamps. We point to further results in the main paper that show the impact of MoI on representation quality and downstream performance across multiple tasks.\n\nWe believe that we concretely address the main concerns shared in the review. Is our response (and planned changes to the manuscript) satisfactory? If not, please let us know how we can better address these concerns.\n\nThank you so much for your efforts to help improve our work!",
" Dear reviewers, \n\nWe thank you once again for the thorough consideration of our work. With the rebuttal/discussion period nearing a close, we would like to hear from reviewers on whether our rebuttal addressed your concerns. We are eager to clarify any remaining questions, help quell any remaining concerns, and more importantly, look for ways to improve the paper overall.",
" > The idea is pretty close to \"Actions ∼ Transformations\" which unfortunately is missing from the references.\n\nWe thank the reviewer for bringing this paper to our attention. We will duly cite this paper as they learn from the same concept of visual state change. Although the underlying concept -- transformation or change in the environment caused by an action -- is similar to ours, the settings, the algorithm and the overall contributions are quite different from our work. \n\n- We focus on the *multi-modal setting*, which has two advantages: 1) state changes are often indicated by audio and can be therefore better localized and 2) the audio itself is often discriminative of what type of state change took place.\n\n- Unlike ”Actions~Transformations”, we focus on the *self-supervised* representation learning problem. As a result, we cannot leverage class information to learn a class specific transformation between the before and after states.\n\n- “Actions ∼ Transformations” takes the simplifying assumption of working with trimmed videos, while we consider the more challenging (and realistic) setting containing arbitrarily long, untrimmed videos of daily activities.",
" We thank the reviewer for their valuable feedback and suggestions. \n\n> My main question is the generalization ability of the proposed method...It can work well for activities like opening/closing the door intuitively. However, how can it work for others like stirring and washing?\n\nTo provide further insights onto the generalization ability of the proposed method, we conduct an experiment to assess how discriminative the learned representations are for different types of activities. For this experiment, we first categorize the activities based on the nature of the transition: \n\nT1) “irreversible” audible interactions with backward transition never happening (e.g., cutting vegetables); \nT2) “reversible” audible interactions with backward transition likely to happen (e.g., open/close fridge); \nT3) audible interaction with no transition direction (e.g., stirring).\n\nClearly, AStC can learn from both T1 and T2 interactions, as they are associated with visual state changes. Although T1 interactions are never seen in reverse order, the model still benefits from knowing what is the correct order, as this leads to more state-aware representations. As for T3 type interactions, they can indeed be a failure mode of the AStC objective, if they cause NO change in the visual state of the environment. \n\nHowever, we highlight that ALL self-supervised objectives have failure modes. For example, the AVC objective cannot learn to associate audio and video if the audio is silent. Even standard image-based contrastive frameworks like MoCo or SimCLR cannot learn when the sampled crops do not capture parts of the same object. In fact, self-supervised pre-training is not meant to be perfect. It is only meant to provide a proxy for learning representations, that can be tuned on a downstream task. Despite not being able to learn from T3 type interactions, AStC significantly improves the learned representations when considered in conjunction with AVC (for example, see the gains between rows 4 and 7 of Tables 1 and 2).\n\nTo provide further insights, we computed the mean average precision for each type of interaction, after training for the downstream action recognition task. We also compared this metric using both the finetuned AVID baseline and RepLAI models. The results, shown in the table below, indicates the RepLAI performs significantly better than the baseline across all categories of transition/direction, showing that RepLAI (which includes both AVC and AStC) is generic enough to enhance representations for all types of interactions.\n\nMean average precision score:\n| Method | T1 | T2 | T3 |\n|------------------ |:-----: |:-----: |:-----: |\n| AVID (Finetuned) | 34.50 | 22.80 | 10.64 |\n| RepLAI (Ours) | **46.22** | **29.47** | **14.78** |\n\nFurthermore, we observed that MoI detection helps finding timestamps that have more perceptible visual state change (even for T3 type interactions). To see this, we computed the norm of the visual state change $\\|\\|f_v(v_{t-\\delta})-f_v(v_{t-\\delta})\\|\\|_2$ around MoIs and around randomly chosen timestamps. The results, shown below, confirm this claim.\n\nNorm of visual state change:\n| Method | T1 | T2 | T3 |\n|----------------------------- |:----: |:----: |:----: |\n| Random location | 2.77 | 2.76 | 2.61 |\n| Moment of Interaction (MoI) | **3.19** | **3.17** | **3.01** |\n\n\nFinally, we also measured how well the AStC loss learns the association between the audio and the visual state change in the forward direction. 
Specifically, we calculated the average similarity $ sim ( \\Delta v_t^{frwd} , a_t)$ within each of the three categories (T1, T2, T3). The table below shows a comparison of this forward association score between our approach (RepLAI) and the AVID baseline. As expected, RepLAI learns better associations between the audio and visual state changes than AVID. More importantly, despite being are harder to identify, RepLAI still performs relatively well among T3 type interactions. This shows that, even for actions like washing and stirring, there are still slight visual state changes that the model can learn.\n\nAverage similarity:\n| Method | T1 | T2 | T3 |\n|--------------- |:-----: |:-----: |:-----: |\n| AVID | 0.364 | 0.357 | 0.346 |\n| RepLAI (Ours) | **0.782** | **0.764** | **0.646** |\n\n> There is lack of fully-supervised performance for comparison in experiments.\n\nWe already provide some fully supervised results on Ego4D (Table 2, rows S1, S2, S3) and have a discussion on that in Lines 325-329. We acknowledge reviewer’s suggestion and will provide additional supervised results in Table 1, using the same R(2+1)D-18 architecture pre-trained on Kinetics-400. Unfortunately, we were not able to finish this result during the rebuttal period.\n",
" > How do you handle the audio diversity of semantically similar interactions? For example, “cutting cucumber” on a wooden board will sound different from doing it on a plastic board, or sound of “sautéing” mushrooms/onions in a pan is meaningfully influenced by the oven/pan/oil temperature.\n\nFrom an audible state-change (AStC) perspective, we expect that the visual representations of cutting vegetables to be similar i.e. invariant to the type of cutting board. This is because the only visible change is ‘whole vegetable’ to ‘cut vegetable’ and the cutting board is unaffected by this visual state change.\nHowever, from an audio-visual correspondence (AVC) perspective, we expect that the different sounds produced by the wooden vs plastic boards would allow the model to develop visual features that are discriminative of the type of cutting board. \nHowever, since AVC and AStC operate on different projections of the audio and visual space, these two goals do not contradict each other.\n\n> As part of ablation studies, have you tried a linear head for $h_{\\Delta V}^{AStC}$ since forward delta seems to be the negative of the backward one.\n\nWe acknowledge reviewer's suggestion and train RepLAI with a linear head for the visual state change projection. The results are shown in the table below where we compare RepLAI trained with linear head and RepLAI trained with non-linear head (ours). From the table, we can observe that although the difference between the two settings is small, the non-linear head design performs slightly better.\n\n| | Top-1 Acc | | Top-5 Acc | |\n|----------------------- |----------- |------- |----------- |------- |\n| Method | Verb | Noun | Verb | Noun |\n| RepLAI w/ linear head | 31.02 | 10.85 | 73.12 | 30.08 |\n| RepLAI (Ours) | **31.71** | **11.25** | **73.54** | **30.54** |\n\n> AVC loss shown in Figure 3.b encourages the embedding of $v_{t-\\delta}$ to be close to $v_{t+\\delta}$ by both anchoring on audio embedding at t while the AStC encourages them to be different. These two objective functions, to the best of my understanding are pulling in opposite directions!\n\nThis is a misunderstanding. AVC and AStC do NOT contradict each other, as they operate on different feature spaces. As we showed in Fig 3, the representations that are matched in the AVC and AStC objectives are obtained from *different* projections of the underlying representations $f_V(v)$ and $f_A(a)$, ie, using different projection heads $h^{AVC}$ and $h^{AStC}$. Note that the different projection heads avoid this contradiction, as the model can dedicate part of the $f_V(v)$ representation to be invariant to state-changes (and thus good for AVC), and another part to be discriminative of state changes (to satisfy AStC).\n\n> Would not it make more sense to choose the audio at $t-\\delta$ and $t+\\delta$, instead of $t$, when computing two AVC losses?\n\nIt wouldn’t make much difference as audio clips are relatively large (compared to $\\delta$), and so both audio clips would overlap significantly. Having said that, this implementation does make perfect sense, and we did consider it. 
However, given that using a single audio clip at time t does NOT impose any limitation (as described in the answer to the previous comment), we decided to go with this implementation to reduce computation (ie., to reduce the number of forward and backward passes required at each iteration).\n\n> In Table 1, Top 5 Acc: AVC seems to be doing most of the job while w/o AStC and MoI, performance on “verb” is almost maintained (~73%) but the pattern is different for “noun”. Any insights?\n\nIt is difficult to make a concrete remark here, especially because of the broad nature of the top-5 metric as well as the head trained after self-supervised pre-training. It is likely that although having five predictions reduces the difficulty of classifying certain competing/similar verbs, it does not help in distinguishing similar noun classes. ",
" > I am not convinced that learning from audible state change is generic enough. I would like to hear authors' feedback on different types of interactions and why their proposed model should work in a self-supervised setup where we don’t know which type of these interactions are included in the training data.\n\nTo provide further insights on how AVC and AStC learn from different types of interactions, we categorize all actions in Epic-Kitchens into three types: \nT1) “irreversible” audible interactions with backward transition never happening (e.g., cutting vegetables); \nT2) “reversible” audible interactions with backward transition likely to happen (e.g., open/close fridge); \nT3) audible interaction with no transition direction (e.g., stirring).\n\nAStC naturally learns from both T1 and T2 interactions, as they are associated with a visual state change (the reviewer seems to suggest that AStC can handle T2 but not T1). Although T1 interactions are never seen in reverse order, the model still benefits from knowing what is the correct order, as this leads to more state-aware representations. As for T3 type interactions, they can indeed be a failure mode of the AStC objective, if they cause __no change__ in the visual state of the environment. \n\nHowever, we highlight that ALL self-supervised objectives have failure modes. For example, the AVC objective cannot learn to associate audio and video if the audio is silent. Even standard image-based contrastive frameworks like MoCo or SimCLR cannot learn when the sampled crops do not capture parts of the same object. In fact, self-supervised pre-training is not meant to be __perfect__. It is simply meant to provide a pretext task for learning representations, that can be adapted (via sample-efficient tuning) on a downstream task. Despite not being able to learn from T3 type interactions, AStC significantly improves the learned representations when considered in conjunction with AVC (for example, see the gains between rows 4 and 7 of Tables 1 and 2).\n\nTo provide further insights, we take the mean average precision score for each type of interaction, after training for the downstream action recognition task. We also compared this metric using both the finetuned AVID baseline and RepLAI models. The results, shown in the table below, indicates the RepLAI performs significantly better than the baseline across all categories of transition/direction, showing that RepLAI (which includes both AVC and AStC) is generic enough to enhance representations for all types of interactions.\n\nMean average precision score for three types of interaction:\n\n| Method | T1 | T2 | T3 |\n|------------------ |:-----: |:-----: |:-----: |\n| AVID | 34.50 | 22.80 | 10.64 |\n| RepLAI (Ours) | **46.22** | **29.47** | **14.78** |\n\nFurthermore, we observed that MoI detection helps finding timestamps that have more perceptible visual state change (even for T3 type interactions). To see this, we computed the norm of the visual state change $\\|\\|f_v(v_{t-\\delta})-f_v(v_{t-\\delta})\\|\\|_2$ around MoIs and around randomly chosen timestamps. 
The results, shown below, confirm this claim.\n\nNorm of visual state change for three types of interaction:\n\n| Method | T1 | T2 | T3 |\n|----------------------------- |:----: |:----: |:----: |\n| Random location | 2.77 | 2.76 | 2.61 |\n| Moment of Interaction (MoI) | **3.19** | **3.17** | **3.01** |\n\n> It is not fair to compare rows 2 and 7 since row 2 has been only trained on AudioSet which is very different from evaluation egocentric datasets. AVID should be fine-tuned on the egocentric datasets.\n\nWe acknowledge reviewer's suggestion and finetune AVID on Epic-Kitchens using only AVC (i.e., RepLAI w/o AStc and MoI). The results for this setup are tabulated in the table below, alongside other settings from Table 1. We observe that finetuning AVID on the egocentric data does improve performance, but it is still significantly behind the proposed RepLAI method. We will add these results to the paper, as well as the equivalent results on Ego4D~(which we couldn't finish due to time constraints).\n\n| | Top-1 Acc | | Top-5 Acc | |\n|------------------ |:---------: |:-----: |:---------: |:-----: |\n| Method | Verb | Noun | Verb | Noun |\n| AVID | 26.62 | 9.00 | 69.79 | 25.50 |\n| AVID (Finetuned) | 27.25 | 9.10 | 69.94 | 26.14 |\n| RepLAI w/o AStC | 29.29 | 9.67 | 73.33 | 29.54 |\n| RepLAI w/o MoI | 28.71 | 8.33 | 73.17 | 27.29 |\n| RepLAI (Ours) | **31.71** | **11.25** | **73.54** | **30.54** |",
" > In nutshell, change in audio is not only as a result of human-object interaction, it can be due to change of location or variant environment as well.\n\nIn Lines 53-54 and Lines 135-137, we intend to convey that an MoI can be any form of action or interaction among the entities in the environment that leads to a perceptible visual state change. Thus, MoIs are not restricted to just human-object and object-object interactions and can be associated with any form of interaction within the environment. We hypothesize that whenever there is such a visual state change, there is a high probability for this change to be accompanied with a distinct/characteristic audio pattern (Lines 53-54) leading to an MoI detection. Such a formulation of MoI enables our method to focus on timestamps in the video that provide relatively richer audio-visual signals for the model to learn meaningful feature representations that understand both visual states and temporal change in visual states. Note that we purposefully intend to detect generic enough MoIs to accommodate the unconstrained and unlabeled nature of the training data, and prevent being restricted to a closed set of human interactions.\n\nWe agree with the reviewer that there will be a change in the audio in the reviewer’s example when a person walks to the living room and there is a fan or stereo playing. As a result, MoI detection will indeed pick this timestamp. But given the above mentioned definition, we posit that this change in audio is a valid MoI, as it is associated with the person changing its location (an outcome of the interaction of the person with their environment). This will be accompanied by a significant change in the visual state of the environment along with a distinct audio pattern. We will paraphrase Section 3.2 to elucidate this definition of Moment of Interaction (MoI).\n\n> I cannot see how MoI detection works in a realistic environment.\n\nBoth Ego4D[1] and Epic-Kitchens[2] are challenging, realistic environments and have portions of video that do not have clean audio. Ego4D consists of several daily activities in the home, workplace, social setting, leisure and commuting which are unscripted and ``in the wild'' that represent natural interactions (Page 2 of Ego4D [1]). Epic-Kitchens also includes long-term, unscripted videos of human-object interactions with everyday objects in a kitchen (Page 2 of Epic-Kitchens [2]). The improved performance over multiple downstream tasks~(Table 1, 2) achieved by detecting MoIs indicates that the proposed approach is generic enough to work in a realistic environment.\n\n[1]. Grauman, Kristen, et al. \"Ego4d: Around the world in 3,000 hours of egocentric video.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[2]. Damen, Dima, et al. \"Rescaling egocentric vision.\" arXiv preprint arXiv:2006.13256 (2020).",
" We thank the reviewer for their valuable feedback and suggestions. \n\n> Authors argue that the AVC objective leads to representations that are not informative of the changes over time…Can the authors clarify their point on the lack of AVC’s suitability for the task specially given the ablation studies that show dropping AVC vs AStC is not that much different.\n\nWhile AVC helps to learn effective correspondence between the visual states and their associated audio, it is invariant to the visual state change by design. So, although AVC can learn a discriminative representation for “opening fridge” (which is indeed different from “closing fridge”), it provides NO incentive for the model to learn the higher-order information of temporal change in the visual state. This information is necessary for the model to build a temporal understanding of whether we are at the beginning of the action (e.g., holding the handle to open the fridge) or at the end of the action (e.g., opening the fridge to grab something). As mentioned in line 297-305, our newly introduced AStC loss complements and mitigates this shortcoming of AVC by operating directly on the direction of the visual state change. \n\nAStC is designed to make the model learn this temporal change in visual state by correctly associating the forward direction of change ($\\Delta v_t^{\\text{frwd}}$) with the audio (Equation 4) and reducing the association of the backward direction of change ($\\Delta v_t^{\\text{bkwd}}$) with the audio (Equation 5). By directly working on the direction of visual state change ($\\Delta x$) rather than on the visual states ($x$), AStc allows the model to be more aware about the transitions in visual states over the course of any action or interaction in the environment.\n\nWe would like to clarify that we do not suggest the equality of the AVC and AStC terms or the superiority of one over the other. By _complementing_ each other, AVC and AStc when combined together (Row 7 of Table 1 and Table 2 of main paper), RepLAI achieves the best performance across all downstream tasks compared to other settings of not having either AVC or AStc (Row 3, 4 of Table 1 and Table 2 of main paper)\n\n> …given the ablation studies that show dropping AVC vs AStC is not that much different.\n\nThe performances shown, for example, in Rows 3 and 4 of Tab. 1 are indeed similar (29.9% vs 29.3% top1 verb accuracy and 10.5% vs 9.7% top1 noun accuracy), but this is likely just a coincidence. We highlight that AVC and AStC are required to learn very different representations by design - AVC focuses on the direct audio-visual correspondences, and AStC focuses on the direction of visual change. Thus, the fact that AStC alone achieves similar performance to the more common AVC objective in both Tables 1 and 2 already shows the potential of AStC.\n\nThe second important insight is that AVC and AStC are indeed complementary to each other. Their combination (Row 7 of Table 1 and Table 2 of paper) achieves the best performance across all downstream tasks compared to other settings of not having either AVC or AStc. In the absence of either (Row 3 and Row 4), the model learns sub-optimal representations and achieves lower performance. For example, in Tab. 1, the combination yields (**31.7, 11.2**) top1 accuracy for (verbs, nouns), compared to (29.9, 10.5) without AVC. Similar gains are found in Tab. 2 for the Ego4D dataset on different downstream tasks.\n\n\n\n\n",
" > W2. The authors are missing citations for some of the papers mentioned in W1 (e.g. MMV, Evolving Losses).\n\nThank you for pointing out the missing references. We will add them to the revised draft. We agree - MMV and Evolving Losses are important works in the audio-visual self-supervised learning (SSL) literature.\n\n> W4. Could the authors show more examples of moments of interaction? It would be specially interesting to see where the detection of those moments works and where it fails...I understand the reasoning behind the methodology of using the reverse clip as negative, but I believe the readers would benefit from a few visual examples to understand that.\n\nDue to space restrictions, we could only visualize three moments of interaction in the main paper (Fig. 1 and Fig. 4). More examples were included in Supplementary Material (Fig 1-3) which we will definitely expand them in a revised version, including qualitative visualization of reverse clips as negatives. Thank you for this constructive feedback.\nAn intuitive failure mode of a moment of interaction is when an action is not associated with a distinctive audio pattern, such as “look”. We’ll revise and incorporate this.\n",
" We thank the reviewer for their valuable feedback and suggestions. \n\n> Have the authors considered adding additional baselines when comparing to the state-of-the-art? If not, why not?\n\nWe thank the reviewer for the constructive recommendation. We will incorporate the suggestion in the final revision. To do so, we used XDC pre-trained on Audioset (Alwassel et al. 2020) as the baseline. Notably, unlike most other audio-visual self-supervised learning (SSL) methods, XDC’s self-supervised objective is based on deep clustering (most others are contrastive, akin to AVID). Below, we report results on the Epic-Kitchens. Rows (1) and (2) are exactly from the paper. Row (3) shows the performance of the pre-trained XDC model on the action recognition task, and row (4) shows the performance of our RepLAI model using XDC as the initialization (instead of AVID).\n\n| | Top-1 Acc | | Top-5 Acc | |\n|------------------------------------ |:---------: |:-----: |:---------: |:-----: |\n| Method | Verb | Noun | Verb | Noun |\n| (1). AVID (Morgado et al. 2021) | 26.62 | 9.00 | 69.79 | 25.50 |\n| (2). RepLAI w/ AVID initialization | **31.71** | **11.25** | **73.54** | **30.54** |\n| (3). XDC (Alwassel et al. 2020) | 24.46 | 6.75 | 68.04 | 22.71 |\n| (4). RepLAI w/ XDC initialization | 29.58 | 9.62 | 71.87 | 28.05 |\n\nXDC representations transfer slightly worse to Epic-Kitchen than AVID (row 1 vs 3). Our method performs better than XDC by significant margins (row 2 vs 3). RepLAI still makes strong improvements when using XDC as the initialization (row 3 vs 4). These results provide further evidence of the benefits of the proposed approach. In the final revision, we plan to complete the analysis with XDC initialization. We will include results on the Ego4D dataset and tasks.\n\n> The authors do not properly evaluate the ability to detect moments of interaction. What evidence do the authors have that the detection of change works properly?\n\nAs suggested by the reviewer, we try to evaluate the ability of the model to detect moments of interaction. In our work, we show that MoI helps to improve performance by detecting locations in the video that have better perceptible visual state change. We validate this by computing the norm of the difference between the before and after visual state for a detected MoI (averaged over all detected MoIs). A higher visual state change norm indicates that MoI is able to detect locations in the video that have a significant and meaningful visual state change. We compare this with that computed over all randomly chosen locations in the training videos and tabulate the results below, \n\n| Method | $\\|\\|$ visual state change $\\|\\|$ $\\uparrow$ |\n|----------------------------- |:----: |\n| Random location | 2.73 |\n| Moment of Interaction (MoI) | **3.14** |\n\nFrom the above table, we can observe that the magnitude of visual state change around detected MoI is significantly larger than that around randomly picked locations. This validates that MoI is more effective in picking locations with relatively better visual state change. This, in turn, provides a richer signal to the model to learn better representations and provide stronger performance on downstream tasks. 
Even if the detected moments of interest are less accurate, biasing the sampling of training clips towards moments in time is more conducive for learning representations, i.e., when interactions or actions are happening.\n\nAdditionally, in our work, we indeed evaluate the utility of moments of interaction (MoI) through their impact on representation quality and performance on multiple downstream tasks. Particularly, comparing rows 5 and 7 of Tables 1 and 2 of the main paper demonstrates that sampling training clips based on our MoIs improve representation quality and transfer. We would also like to highlight that the proposed self-supervised objectives (Sec. 3.3 and 3.4) utilized to train the model are meaningful regardless of MoIs – comparing rows (2) and (5) in Tab. 1 of the main paper. \n\n\n",
" We thank all the reviewers for their valuable feedback and suggestions. In this paper, we introduce an audio-driven self-supervised method to learn representations of egocentric video of daily activities. Our approach uses audio in two novel ways: 1) to find moments of interaction (MoI) which we show are conducive to better representation learning and 2) as part of our audible state change loss (AStC) which encourages the model to develop state-aware representations.\nWe are glad the reviewers found that the proposed method is **novel and effective** (Reviewer wUoy), our evaluation **on two very relevant well-known ego centric benchmarks…makes the paper stronger** (Reviewer coBj) and that the **paper is clear in presentation and has provided an interesting view to self-supervised multi-modal representation learning in egocentric videos** (Reviewer WY3G). We summarize the clarification related to MoI and AStC below and further address reviewers’ individual feedback in detail by replying to their comment threads individually. \n\n**1) Audible State Change Loss (AStC)**: AStC is designed to make the model learn the temporal change in visual state by increasing the probability of associating the forward (correct) direction of change ($\\Delta v_t^{\\text{frwd}}$) with the audio (Equation 4) and reducing the probability of associating of the backward (incorrect) direction of change ($\\Delta v_t^{\\text{bkwd}}$) with the audio (Equation 5). We mention this definition in Lines 158-161. By directly working on the direction of visual state change ($\\Delta x$) rather than on the visual states ($x$), AStC allows the model to be more aware about the transitions in visual states over the course of any action or interaction in the environment.\n\n**2) Moment of Interaction**: Referring to our definition of MoI in the paper (Lines 53-54, Lines 135-137), we intend to convey that an MoI can be any form of action or interaction among the entities in the environment that leads to a perceptible visual state change. Thus, MoIs are not restricted to just human-object and object-object interactions and can be associated with any form of interaction within the environment. We hypothesize that whenever there is such a visual state change, there is a high probability for it to be accompanied with a distinct/characteristic audio pattern (Lines 53-54) leading to an MoI detection. Such a formulation of MoI enables our method to focus on the timestamps in the video that provide relatively richer audio-visual signals for the model to learn meaningful feature representations that understand both visual states and temporal change in visual states. Note that we purposefully intend to detect generic enough MoIs to accommodate the unconstrained and unlabeled nature of the training data, and prevent being restricted to a closed set of human interactions.\n\nWe now answer each reviewer on a separate thread. We encourage reviewers to reach out during the discussion phase, if you still have questions. We’ll be prompt in our responses.\n",
" In this paper authors introduce a self-supervised learning method from audio-visual data. Specifically, the method revolves around using moments of interaction to learning meaningful audio-visual relation. Authors split the learning in two losses: an standard audio-visual loss, and a loss around the audible change of state and its relation to the visual change of state. Authors evaluate their proposed method in EpicKitchens and Ego4D. \n **Strengths**:\n\n- S1. Audio-visual self-supervised learning is a powerful tool to learn representations. However, typical approaches do not exploit the semantic importance of the moments where to sample the audio. This work is a step towards using the audio content in a most meaningful way.\n\n- S2. The results show how the method improves over the baseline with standard audio-visual learning. \n\n- S3. Authors evaluate in two very relevant well-known ego centric benchmarks. I believe that is important as it makes the paper stronger.\n\n- S4. Ego-centric data is going to be becoming more important in the near future. With the progress of recording devices and embodied research, works along the lines of this paper will become more relevant.\n\n- S5. I specially like the idea of detecting state changes through audio. According to the paper the procedure is quite simple and the self-supervised method certainly benefits from seeing samples around that moment.\n\n**Weaknesses**\n\n- W1. Authors compare with a single baselines for audio-visual learning. I think other works such as MMV (Alayrac, NeuriPS 2020), Evolving Losses (Piergiovanni, CVPR 2020), Brave (Recasens, ICCV 2021), XDC (Alwassel, NeurIPS 2020) are very relevant in the community. I understand the topic is slightly different and authors cannot retrain with all the baselines, but I still believe that having a single baseline such as AVID is insufficient for a publication. \n\n- W2. The authors are missing citations for some of the papers mentioned in W1 (e.g. MMV, Evolving Losses). I think those works are important in the space of self-supervised audio-visual learning.\n\n- W3. I think authors do not properly evaluate the ability of their model to detect moments of interaction. The description of the method is very complete and in Table 1 they ablate using the method, but I think it would be good to somehow evaluate whether the proposed method works.\n\n- W4. I am missing some examples of moments of interaction I understand the reasoning behind the methodology of using the reverse clip as negative, but I believe the readers would benefit from a few visual examples to understand that.\n\n My concerns are listed in Weaknesses. My main questions to the authors would be:\n\n- Have the authors considered adding additional baselines when comparing to the state-of-the-art? If not, why not?\n\n- Which evidence do the authors have that the detection of change works properly if they don't have any evaluation for that signal.\n\n- In general, could the authors show some more examples of moments of interaction (similar to the ones in supplementary) in the main paper. I think it would be specially interesting to see examples where the detection of those moments works and other where it fails. Authors discuss the limitations of the method in 4.2. I would suggest expanding that a bit with general negative societal impact that working with ego video can have. I believe egocentric video can be used to track everyday's life which could have some negative impact. ",
" This paper proposes leveraging audible state changes for self-supervised representation learning from long-form egocentric videos. Authors initially detect the timestamps where interesting interactions occur (MoI) and then compute a cross modal contrastive objective where the natural transition (before, interaction, after) is more likely than one which is temporally reverse (after, interaction, before). Paper is clear in presentation and has provided an interesting view to self-supervised multi-modal representation learning in egocentric videos with audible state change. The idea is pretty close to \"Actions ∼ Transformations\" <https://arxiv.org/pdf/1512.00795.pdf> which unfortunately is missing from the references. Below are my detailed comments.\n\n- Line 48-49: Authors argue that AVC objective leads to representations that are not informative of the changes over time. If indeed the sound of “opening fridge” is different from that of “closing fridge”, AVC will encourage video representation of “opening fridge” to be more similar to the audio representation of “opening fridge” than “closing fridge” since audio embedding of the latter will be served as a negative instance within the contrastive framework. Note that, you do not have to sample after the fact e.g once the fridge is closed/opened, instead you can sample during the action e.g while fridge is being opened/closed, something that working with video in a cross-modal contrastive setup easily allows you to do. With that said, I would like to see authors clarifying their point on the lack of AVC’s suitability for the task specially given the ablation studies that show dropping AVC vs AStC is not that much different.\n\n- On finding MoI: As paper mentions, egocentric videos are naturally long-form, as they continuously capture daily activities. Hence, a person moving in an environment i.e change in location, even without any interaction can result in changes in audio spectrogram. For example, a person is cooking some food in the kitchen, then walks to the living room to pick up a book. The fact that there is a fan or stereo playing in the living room, will result in a change in audio (note that the sound of fan or stereo is not necessarily audible in the kitchen), however those are not MoI since there has been no interaction with either stereo or fan yet from Sec 3.2 it seems to me that the proposed MoI approach should pick those. In nutshell, change in audio is not only as a result of human-object interaction, it can be due to change of location or variant environment as well and I cannot see how the proposed MoI detection method can work in a realistic environment.\n\n- I am not convinced that learning from audible state change as described in Sec 3.3 is generic enough. For example, the visual state is different after hearing the sound of MoI for “opening fridge” , while before state shows a closed fridge, the after state depicts an open one. Also, due to “distinct” sound of “opening fridge” the backward transition should be less likely. Now consider the example of “cutting cucumber”, the backward transition almost never happens (humans usually don’t stitch cucumber slices together!), while it is reasonable to go from a closed fridge to an open one and vice versa. There are also cases which despite audible MoI, visuals look almost identical in before and after like “stirring a pot”. 
I suspect that more clear performance gains seen on Ego4D versus Epic-Kitchens is partly related to the state change properties which are more prominent in the former dataset. I would like to hear authors feedback on the different aforementioned types of interactions and why their proposed model should work in a self-supervised setup where we don’t know which type of these interactions are included in the training data.\n\n- Line 269: I do not think it is fair to compare rows 2 and 7 since row 2 has been only trained on AudioSet which is very different from evaluation egocentric datasets. To see the true additional value of your contributions (use of MoI and AStC loss), AVID should be fine-tuned on the egocentric datasets (equally RepLAI w/o AStC and MoI) - How do you handle the audio diversity of semantically similar interactions? For example, “cutting cucumber” on a wooden board will sound different from doing it on a plastic board, or sound of “sautéing” mushrooms/onions in a pan is meaningfully influenced by the oven/pan/oil temperature.\n\n- As part of ablation studies, have you tried a linear head for $h_{\\Delta V}^{AStC}$ since forward delta seems to be the negative of the backward one.\n\n- AVC loss shown in Figure 3.b encourages the embedding of $V_{t-\\delta}$ to be close to $V_{t+\\delta}$ by both anchoring on audio embedding at t while the AStC encourages them to be different. These two objective functions, to the best of my understanding are pulling in opposite directions! Would not it make more sense to choose the audio at $t-\\delta$ and $t+\\delta$, instead of t, when computing two AVC losses?\n\n- In Table 1, Top 5 Acc: AVC seems to be doing most of the job while w/o AStC and MoI, performance on “verb” is almost maintained (~73%) but the pattern is different for “noun”. Any insights?\n Authors have addressed the limitations",
" This paper proposes a new way for self-supervised learning with egocentric videos to learn from audible interactions. Specifically, it uses audio signals to identify the state changes and use these transition states to learn the audio-visual correlations. The proposed method performs much better than a recent method AVID [42] and the ablation study shows the effectiveness of each model component. Strengths\n\n- The idea of using audio to identify the state changes for learning audio-visual correlation is novel and seems effective. \n- The paper is well written and easy to read\n- The performance of this method is good. \n\nWeaknesses\n\n- The example used for illustrating Eq. 1 are not convincing. The given example is the audio of closing the door should be (1) similar to the visual transition opened door → closed door and (2) dissimilar to the (backwards) transition closed → open. This activity has strong temporal order. But how to deal with activities like stirring and washing? \n- There is lack of the fully-supervised performance for comparison in experiments. My main question is the generalization ability of the proposed method, i.e., increasing the probability of associating the audio with the visual state change in the forward direction but decreasing the probability in the backward direction. It can work well for activities like opening/closing the door intuitively. However, how can it work for others like stirring and washing? The limitations and potential negative impact are discussed in the supplementary. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"IF0sazx3LWg",
"QJ7JhZAfTRP",
"hn18lMTENm",
"O6KIDaZsuRb",
"faC_Uiks94G",
"znwbZoJKoWr",
"nips_2022_AhbTKBlM7X",
"YIp7RRNzNuTW",
"O6KIDaZsuRb",
"fPRbj-_oF2r",
"CVgnH9x-MSv",
"yt1PbmmLAWx",
"faC_Uiks94G",
"_cGP3J-mMxi",
"znwbZoJKoWr",
"nips_2022_AhbTKBlM7X",
"nips_2022_AhbTKBlM7X",
"nips_2022_AhbTKBlM7X",
"nips_2022_AhbTKBlM7X"
] |
nips_2022_agNTJU1QNw | Geometric Order Learning for Rank Estimation | A novel approach to rank estimation, called geometric order learning (GOL), is proposed in this paper. First, we construct an embedding space, in which the direction and distance between objects represent order and metric relations between their ranks, by enforcing two geometric constraints: the order constraint compels objects to be sorted according to their ranks, while the metric constraint makes the distance between objects reflect their rank difference. Then, we perform the simple $k$ nearest neighbor ($k$-NN) search in the embedding space to estimate the rank of a test object. Moreover, to assess the quality of embedding spaces for rank estimation, we propose a metric called discriminative ratio for ranking (DRR). Extensive experiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and thus yields excellent rank estimation performances. | Accept | This paper proposes a new approach named geometric order learning (GOL) for rank estimation. Reviewers found that the idea is novel and the paper is well written. The authors have also clearly addressed most questions from reviewers in their responses. Thus, I recommend the acceptance of this paper. | train | [
"HRU5DR_BdcZ",
"JSUasE3CrRM",
"c9QKOVAl_zD",
"tqRlxOeeu-x",
"EDFS14vald7",
"s4cLQWDL0eN",
"ks-RMcveOBe",
"qko_CUZV8Z2",
"1OYc7F3esg2",
"Db3X9rhM7_a",
"n769zbUCOJ_",
"pCWKa3P8qh9",
"99DWMlgx03q",
"3fcvCxF6E9_"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your constructive and insightful review on our paper. We do appreciate it. ",
" Thanks for the clarifications. I don't have any other questions. ",
" Thank you for your feedback. We appreciate it greatly. \n***\n**Asymmetric $v_\\mathrm{f}$ and $v_\\mathrm{b}$:** \n\n> What we meant by 'stable results' is that the loss converged faster and the performance was slightly improved. \n\n> Please note that, in the order constraint in Eq. (5), the forward direction serves as a positive guide with which $v(h_x, h_y)$ should be aligned, whereas the backward direction does as a negative guide from which $v(h_x, h_y)$ should be pushed away.\n\n> As for the forward direction, we haven't considered modeling it as $v(r_{\\theta(x)}, r_{\\theta(x)+1})$, instead of $v(r_{\\theta(x)}, r_{\\theta(y)})$ in Eq. (3), because the latter fits the goal of the order constraint in Eq. (5) directly. Furthermore, when using Eq. (3), both reference points, $r_{\\theta(y)}$ as well as $r_{\\theta(x)}$, can be learned by minimizing $L_\\mathrm{order}$. However, as you suggested, we will do experiments with the alternative choice $v_\\mathrm{f}=v(r_{\\theta(x)}, r_{\\theta(x)+1})$ as well. \n\n\\begin{array}{rlcrl}\n & & \\text{O$(r_{\\theta(x)})$} & & \\\\\\ \n & \\diagup & & \\diagdown & \\\\\\ \n\\text{B$(r_{\\theta(x)-1})$} & & & & \\text{C$(r_{\\theta(x)+1})$} \\\\\\ \n | & & & & | \\\\\\ \n | & & & & | \\\\\\ \n\\text{A$(r_{2\\theta(x)-\\theta(y)})$} & & & & \\text{D$(r_{\\theta(y)})$}\\\\\\ \n & \\diagdown& & \\diagup & \\\\\\\\\n & & \\bullet & & \\\\\\\n\\end{array}\n\n> As for the backward direction, our initial experiments indicated that Eq. (4) would be a more reliable negative guide than the symmetric choice $v(r_{\\theta(x)}, r_{2\\theta(x)-\\theta(y)})$. This is illustrated intuitively in the figure above, in which reference points are well arranged in an embedding hypersphere. In this case, the direction OA is more similar to the forward direction OD than the direction OB is, so OA could be less effective as a negative guide. Also, when reference points are not as neatly arranged in an early phase of training, the direction OB of a single link would be more reliable than the direction OA determined by a chain composed of multiple links $(r_{\\theta(x)}, r_{\\theta(x)-1}), (r_{\\theta(x)-1}, r_{\\theta(x)-2}), \\\\cdots, (r_{2\\theta(x)-\\theta(y)+1}, r_{2\\theta(x)-\\theta(y)})$. This is because the chain is not flattened enough in the early phase, especially when the difference between $\\theta(x)$ and $\\theta(y)$ is large (i.e. when there are many links between O and A). Please refer to Figure 4 on P9 and our supplemental video, which visualize embedding spaces during training. \n\n> We will investigate various combinations of forward and backward directions, including all the aforementioned alternatives, and provide the results in the revised manuscript. \n\n**Comparison:**\n> Please note that, when the OL references are used, the performance of the proposed GOL is degraded due to the reduced number of references (only 275 references). It is not because the OL references are poorer than randomly selected samples for the $k$-NN search. In fact, GOL yields the same MAE of 2.184 also with randomly selected 275 samples.\n\n> To be clear, we provided the table above to show that the performance of OL is not improved meaningfully even though more references are employed. Hence, the proposed algorithm does not take unfair advantage of more samples for the $k$-NN search. Also, the table shows that GOL consistently outperforms OL, when the same number of samples (or references) are used. \n***\nIf you have any additional concerns, please let us know. 
We will do our best to resolve them.\n\nThank you again for your feedback.",
" Thanks for your response. The updated text and details provided further improve the clarity of the paper. \n\n**Asymmetric $v_f, v_b$**: I figured the asymmetry was due to things not working well - performance-wise. The comment was to get an idea of why this would be the case. I'm not sure what you mean by stable results in your response. The reliability argument I think can be made for the forward direction as well, so it's not something specific to the backward direction. \n\n**Test time**: This gives an idea of the differences and the gain in test time. \n\n**Comparison**: I'm a little confused by the table result vs text. Is it the case that the performance is subpar with references but does not degrade with the randomly selected subset of the training data (assuming that this is what is reported in the table)?",
" The paper proposes a solution for rank estimation and evaluates it on facial images. The proposed algorithm may inherit biases if the training dataset is biased, which may cause discriminatory effects to certain communities The authors address the fact that the algorithm can lead to biased outcomes and they propose that the bias should be resolved before any practical usage It is possible to address the ethical concerns in the current version.\nPlease specify/cite a few relevant bias removing mechanisms.",
" Thank you for your positive review and constructive comments. Please find our responses below.\n***\n* **Repetitive parts:** We have revised the related work section to remove the repetitive parts. Please see L84-86 on P3.\n\n* **Order and metric properties:** We agree that they are common knowledge. We have removed those properties and shortened the descriptions of order and metric. Please see L112-115 on P3. \n\n* **Asymmetric $v_\\mathrm{f}$ and $v_\\mathrm{b}$:** We may define $v_\\mathrm{f}$ and $v_\\mathrm{b}$ symmetrically, as you commented. Then, for $x \\prec y$, the backward rank direction would be $v_\\mathrm{b} = v(r_{\\theta(x)}, r_{\\theta(x)- (\\theta(y)-\\theta(x))}) = v(r_{\\theta(x)}, r_{2\\theta(x)- \\theta(y)})$, instead of Eq. (4). During the development of GOL, we tested this design choice as well, but Eq. (4) yielded more stable results. This is because $v(r_{\\theta(x)}, r_{2\\theta(x)- \\theta(y)})$ tends to be less reliable, especially in early training phases before convergence, than $v(r_{\\theta(x)},r_{\\theta(x)-1})$ between adjacent ranks is when the rank difference between $x$ and $y$ is large. We are doing more thorough experiments to compare alternative choices for the forward and backward rank directions and will include the results in the camera-ready.\n\n* **Gain in testing time:** The conventional order learning algorithms compare a test instance with references using a comparator or a regressor, which consists of multiple layers demanding sequential computations. Moreover, OL and DRC-ORID need additional processes, such as MAP estimation, to estimate a rank based on ordering relationships, and MWR-G adopts iterative estimation. In contrast, the proposed algorithm performs the simple $k$-NN search for rank estimation. Note that the distances to all samples can be computed efficiently in a parallel manner. Thus, as listed below, the proposed algorithm performs fast even with 44,000 samples for the $k$-NN search. Please see L296-300 on P9 and L640-643 on P22.\n\\begin{array}{l|c|c|c|c}\n\\hline & \\text{MORPH II (A)} & \\text{MORPH II (B)} & \\text{MORPH II (C)} & \\text{CACD (Val)} \\\\\\ \n\\hline \n\\text{\\\\# samples} & 4,394 & 7,000 & 44,000 & 7,600 \\\\\\ \n\\hline \n\\text{Testing time (ms)} & 0.05 & 0.06 & 0.08 & 0.06 \\\\\\ \n\\hline\n\\end{array}\n\n* **Comparison with reference-based OL methods:** As you pointed out, the proposed GOL uses all instances in a training set for the $k$-NN search. Please note that MWR-G also uses the entire training set, since it also performs the $k$-NN search to obtain an initial estimate. On the other hand, OL and DRC-ORID use a small number of references. However, those references should be selected from the training set via a complicated scheme to exclude unreliable samples and boost the performances. \nAs you suggested, GOL may estimate a rank by employing only ‘learned’ reference points (which are different from references in OL and DRC-ORID, ‘selected’ from training sets), but this scheme would yield poor results. It is because our reference points are learned to guide the region of each rank in the embedding space during training, rather than to be used for the $k$-NN search in testing. \nTable below compares the rank estimation performances of the proposed algorithm and OL on setting A of the MORPH II dataset. Even using the entire training dataset $\\\\cal X$ as references, the performance of OL is not improved meaningfully. 
On the other hand, with a smaller number of samples for the $k$-NN search, GOL degrades only slightly. In every case, GOL outperforms OL. Thus, we believe that the comparison with the reference-based OL methods is fair, for the training set is a resource fairly available for all algorithms.\n\\begin{array}{l|c|c|c|c}\n\\hline \\text{\\\\# references or samples} & 275\\~\\text{(references in OL)} & \\hspace{0.6cm}440\\~\\text{(10\\\\% of $\\\\cal X$)}\\hspace{0.6cm} & \\hspace{0.5cm}2197\\~\\text{(50\\\\% of $\\\\cal X$)}\\hspace{0.5cm} & \\hspace{0.6cm}4394\\~\\text{($\\\\cal X$)}\\hspace{0.6cm} \\\\\\ \n\\hline \n\\text{OL} & 2.412 & 2.414 & 2.412 & 2.410 \\\\\\ \n\\hline \n\\text{Proposed GOL} & 2.184 & 2.178 & 2.174 & 2.173 \\\\\\ \n\\hline\n\\end{array}",
" * **Training time:** We train the encoder until the loss converges. Table below lists the training epochs and time for each dataset. We use an NVIDIA GeForce RTX 3090 GPU in the experiments. These training details have been included in the revision. Please see L541-543 on P16. Also, we will share the training codes.\n\\begin{array}{l|c|c|c|c|c|c}\n\\hline \\text{Dataset}& \\text{MORPH II (A)} & \\text{\\hspace{0.1cm}CACD (Train)\\hspace{0.1cm}} & \\text{\\hspace{0.3cm}CACD (Val)\\hspace{0.3cm}} & \\text{\\hspace{0.8cm}UTK\\hspace{0.8cm}} & \\text{\\hspace{0.8cm}HCI\\hspace{0.8cm}} & \\text{\\hspace{0.3cm}Aesthetics\\hspace{0.3cm}} \\\\\\ \n\\hline \n\\text{\\\\# epochs} & 250 & 10 & 30 & 50 & 150 & 50 \\\\\\ \n\\hline \n\\text{Training Time (hrs)} & 5& 7 & 1 & 3 & 1 & 3 \\\\\\ \n\\hline \n\\end{array}\n\n* **VGG16 backbone:** We employ VGG16 as the encoder for a fair comparison with existing algorithms. In the main paper, we compare the proposed algorithm with 19 different algorithms in total. Among them, 14 use VGG16, except for CORAL (ResNet34), Gustafsson et al. (ResNet50), Berg et al. (ResNet50), C3AE (shallow CNN), and OR-CNN (shallow CNN). In particular, except for C3AE, all age estimators in Table 2 use VGG16. This has been clarified. Please see L254-255 on P7. \n\n* **More limitations:** Because the proposed algorithm predicts a rank by the $k$-NN search, it suffers where there are insufficient training instances. For example, GOL yields relatively poor results on MORPH II setting B, which consists of 7,000 training samples and 14,000 test samples. Also, most age estimation datasets contain fewer toddler and elder instances. Thus, GOL yields less accurate estimates on such minority classes. These limitations of GOL have been discussed in the revision. Please see L648-653 on P22.\n\n* **Rationale for ordering ranks in an embedding space:** It is almost impossible to design the ideal encoder, perfectly separating each rank in an embedding space, for there is no clear distinction between ranks in practice. Thus, an instance may be erroneously mapped to the region of a wrong rank. Unlike ordinary classification, different errors have different severities in rank estimation: mistaking a young adult as a toddler is severer than mistaking the young adult as a teenager. Ordering in the embedding space can alleviate such severe errors. This rationale has been clarified. Please see L45-48 on P2. \n\n* **Multiple repetitions of experiments:** Below are five repetitive rank estimation results on various datasets. In this test, the networks are trained from scratch five times. Note that the deviations are negligible, and even the worst repetition outperforms the conventional methods on each dataset. We are doing the repetitions for all experiments and will include the results in the camera-ready.\n\\begin{array}{l|c|c|c|c|c}\n\\hline \\text{Dataset\\hspace{0.3cm}}& \\text{MORPH II (A) (Fold 0)} & \\text{\\hspace{1.0cm}CACD (Val)\\hspace{1.0cm}} & \\text{\\hspace{1.7cm}UTK\\hspace{1.7cm}} & \\text{\\hspace{1.0cm}HCI (Fold 0)\\hspace{1.0cm}} & \\text{\\hspace{0.5cm}Aesthetics (Fold 0)\\hspace{0.5cm}} \\\\\\ \n\\hline \n\\text{MAE} & 2.18±0.023 & 5.61±0.019 & 4.361±0.028 & 0.543±0.021 & 0.288±0.005 \\\\\\ \n\\hline \n\\end{array}\n\n***\nWe have revised our paper to address your comments faithfully and hope that this revision resolve your concerns. If you have any additional concerns, please let us know.\n\nThank you again for your constructive comments. We do appreciate them.",
" Thank you for your positive review and valuable comments. Please find our responses below.\n***\n* **Selection of reference points:** Please note that we do not select reference points from object instances in a training set. They are learnable parameters to guide the region of each rank in an embedding space, and they are jointly optimized with encoder parameters during training. This has been more clearly described in the revised manuscript. Please see L148-152 on P4. \n\n* **Different network architectures:** As you pointed out, the algorithms in Table 1 have different network structures. However, to compare their embedding spaces as fairly as possible, we use the same encoder backbone of VGG16 for all the algorithms. We have clarified this in the revision. Please see L230-231 on P7.\n***\nEvery attempt has been made to address your comments faithfully in the revised paper. If you have any additional comments, please let us know.\n\nThank you again for your positive and constructive comments. We do appreciate them.\n",
" Thank you for your positive review and insightful suggestions. Please find our responses below.\n***\n* **Relation to traditional features:** As you suggested, we have cited and discussed the traditional ranking features and methods (SIFT-Rank and Toews et al. in MICCAI 2012). Please see L81-83 on P3. \n\n* **\"Band\" around a hypersphere:** Yes, in Figure 1 (c), objects are arranged in a band on the embedding hypersphere.\n\n* **Experiments on more datasets:** We will include experimental results on more datasets, such as EyePacs dataset [1] and AADB dataset [2], in the camera-ready.\n\n [1] Kaggle. 2015. Diabetic Retinopathy Detection. https://www.kaggle.com/c/diabetic-retinopathy-detection/overview\n\n [2] Shu Kong, Xiaohui Shen, Zhe Lin, Radomir Mech, and Charless Fowlkes. Photo aesthetics ranking network with attributes and content adaptation. In ECCV, 2016\n***\nWe have made every attempt to address your comments in the revised manuscript and hope that you find this revision satisfactory. If you have additional concerns, please let us know.\n\nThank you again for your positive comments. We do appreciate them.",
" Although face age estimation is one of the applications of the paper's proposed methodology, it is not the only use of the approach, and isn't exactly tied to that application. Moreover, although face attribute estimation can be used as an instrument of surveillance and oppression, age and other ordinal attributes are not usually the concerning ones: unordered categorical attributes tend to be the ones most problematic. Yes, the authors have clearly laid out the ethical concerns and recommend that the proposed algorithm only be used for research purposes and be paired with a bias mitigation strategy. I think the paper can continue to be judged on its technical merits without having an ethical concern.",
" We would like to thank all reviewers for their time and positive reviews. We would also extend our thanks to the area chairs. \nWe are carefully preparing our responses to all suggested comments, and we will upload our response to each question/comment as soon as possible.\n",
" A GOL embedding space represents the direction and distance between objects represent order and metric relations between their ranks, by enforcing two geometric constraints 1) the order/rank constraint
and 2) a metric constraint that reflects rank differences. \n\nThe rank of a test object is estimated by kNN, and a metric called the discriminative ratio for ranking (DRR) assesses the quality of embedding spaces for rank estimation.\n\nExperiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and yields strong rank estimation performance. It seems GOL is the first attempt to design an embedding space in which the direction and distance between objects represent their order and metric relations. The GOL algorithm performs best on 80% of benchmark datasets for facial age estimation, HCI classification, and aesthetic score regression.\n\nIt is not clear how the rank and DRR metric here relate to traditional (mathematical) methods that define rank and order geometry without learning. For example, ranked gradient orientation features were used for head age estimation and classification, with much success, prior to this work.\n\nE.g. please see SIFT-Rank, SIFT being the \"scale-invariant feature transform\" (the state-of-the-art before GPU-based deep CNN learning), and Rank being the order statistics of the SIFT descriptor (equivalent to uniform orientation-sampled gradient filtering).\nhttps://ieeexplore.ieee.org/document/5206849\n\nToews, M., Wells, W.M. and Zöllei, L., 2012, October. A feature-based developmental model of the infant brain in structural MRI. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 204-211). Springer, Berlin, Heidelberg.\n\n It would be interesting to know the link to traditional ranking theories such as SIFT-Rank.\n\nIn Figure 1 c), are we seeing ordered objects arranged in a \"band\" around a hypersphere?\n There are no obvious limitations, except demonstration on more datasets & tasks.",
" This paper introduces geometric order learning (GOL) method for rank estimation by enforcing two geometric constraints: the order constraint and the metric constraint. The order constraint enforces the feature vectors of instances to be arranged according to their ranks, and the metric constraint makes the distance between instances reflect their rank difference. The paper also proposes discriminative ratio ranking metric to assess the quality of embedding spaces for rank estimation. Extensive experiments demonstrate that GOL constructs effective embedding spaces and yields excellent rank estimation performances. Strengths:\n+ A geometric order learning (GOL) method for rank estimation is proposed by enforcing the order constraint and the metric constraint.\n+ A discriminative ratio ranking metric is introduced to assess the quality of embedding spaces for rank estimation.\n+ Experiments are conducted to demonstrate that GOL constructs effective embedding spaces and yields excellent rank estimation performances.\n\nWeaknesses:\n- The proposed GOP method rely on reference points, it is not clear how to select these reference points and how they affect the performance of GOP.\n- As the network architectures of different methods in Table 1 are different, so whether the comparison is fair? - The proposed GOP method rely on reference points, it is not clear how to select these reference points and how they affect the performance of GOP.\n- As the network architectures of different methods in Table 1 are different, so whether the comparison is fair? Yes",
" - The paper introduces an algorithm for learning an embedding space using which the rank of an object can be estimated, e.g. age of a person based on face images. \n- The embedding space is constrained such that the ordering and distance between training instances are preserved. \n- The authors introduce a metric for embedding space evaluation by repurposing the B2W metric (inter-class variance/intra-class variance) to account for the rank ordering of the data embeddings. - The paper is easy to read. However, parts of the introduction and related work section are repetitive. \n- The order and distance properties listed as preliminaries are common knowledge and seem to add no specific detail to the objective formulation in the paper.\n- The central idea of the paper is to combine concepts from order learning and metric learning. The paper presents an objective by combining the two constraints. \n- The visualization presented in the paper shows that the proposed method learns embeddings that are ordered.\n- Ablation study shows that each loss objective added helps improve the performance of the model. \n - The order constraint formulation uses $v_b$ which is based on consecutive ranks while the forward uses difference in references corresponding to $x$ and $y$ ranks - Is there a reason for this asymmetry?\n- Where does the gain in test runtime come from? I would assume comparing with 5 references/rank would be faster than searching for the k-nearest neighbor from $N$ training examples.\n- Also, comparing methods where one uses a compressed representation in the form of references while the other uses the entire training set is a fair setting. The proposed method does learn references during training but seems to ignore these for inference - can the rank estimation be performed with just the references?\n\nMinor: \n- How long is the model trained for? Is there a stopping condition?\n- Is there any particular reason for choosing the VGG backbone?\n - The authors show specific failure cases in facial age estimation where the lighting condition, overexposure, and other image perturbations affect the performance of the model - I think this is a general limitation of the dataset and is not specific to the model. A more careful analysis of the proposed method and its limitations might be useful.\n- The paper lacks a discussion on why the embedding space has to be ordered - This is crucial for the premise of the method proposed. A method that embeds each rank in distinct spaces will be good at rank estimation (I believe this would explain the performance of other methods compared in the paper).\n- The reported results seem to be based on a single run - It would be useful to report results based on multiple random initializations."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"JSUasE3CrRM",
"c9QKOVAl_zD",
"tqRlxOeeu-x",
"s4cLQWDL0eN",
"nips_2022_agNTJU1QNw",
"3fcvCxF6E9_",
"3fcvCxF6E9_",
"99DWMlgx03q",
"pCWKa3P8qh9",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw",
"nips_2022_agNTJU1QNw"
] |
nips_2022_pBpwRkEIjR3 | Enhanced Bilevel Optimization via Bregman Distance | Bilevel optimization has been recently used in many machine learning problems such as hyperparameter optimization, policy optimization, and meta learning. Although many bilevel optimization methods have been proposed, they still suffer from the high computational complexities and do not consider the more general bilevel problems with nonsmooth regularization. In the paper, thus, we propose a class of enhanced bilevel optimization methods with using Bregman distance to solve bilevel optimization problems, where the outer subproblem is nonconvex and possibly nonsmooth, and the inner subproblem is strongly convex. Specifically, we propose a bilevel optimization method based on Bregman distance (BiO-BreD) to solve deterministic bilevel problems, which achieves a lower computational complexity than the best known results. Meanwhile, we also propose a stochastic bilevel optimization method (SBiO-BreD) to solve stochastic bilevel problems based on stochastic approximated gradients and Bregman distance. Moreover, we further propose an accelerated version of SBiO-BreD method (ASBiO-BreD) using the variance-reduced technique, which can achieve a lower computational complexity than the best known computational complexities with respect to condition number $\kappa$ and target accuracy $\epsilon$ for finding an $\epsilon$-stationary point. We conduct data hyper-cleaning task and hyper-representation learning task to demonstrate that our new algorithms outperform related bilevel optimization approaches. | Accept | The paper studies bilevel optimization problems, provides three algorithms for different settings, and improves the convergence analysis in terms of the condition number. In addition, numerical experiments are conducted that provide illustration of the effectiveness of the algorithms. Three reviewers all agree that the paper should be published as it contributes to the literature and will be of interest to the NeurIPS audience.
When preparing the final version of the manuscript, please incorporate the discussion that addressed the reviewers' comments either in the main text or the appendix. | train | [
"iIODO0Y2XsU",
"p8Rphr5lIxH",
"M3gwXQ53dc",
"6fSohfQfZRG",
"9p0W9d6mWbI",
"jhLtNORQNhsG",
"eOgVesah4x3",
"NfOFi13e2Ws",
"smplBqKfVVX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. You resolve my questions, but I want to remain my score.",
" Thanks for your response. Questions have been solved.",
" I really appreciate the author's response. All my questions are answered.",
" Thanks so much for your comments.\n\n**Q1**: Weaknesses: 1) This paper only mentions one circumstance…\n\n**R1**: Since the existing gradient-based bilevel algorithms do not focus on solving the bilevel optimization problems with nonsmooth regularization, they only use the sub-gradient descent to solve these nonsmooth bilevel problems. However, our algorithms use more efficient Bregman-distance-based proximal gradient decent to solve them. Please see the additional nonsmooth experimental results: Specifically, we consider the $L_1$ regularized hyper-representation learning problem, i.e. to learn a sparse hyper-representation. We compare our AsBiO-BreD (Algorithm 3) with various baselines. The test accuracy results for 5-way-1-shot (Table 1) and 5-way-5-shot (Table 2) over the Omniglot dataset are summarized in the Table below:\n\n**Table 1**: The 5-way-1-shot case\n| Time | 20s | 40s | 60s |\n| ------------- | ------------- | ------------- | ------------- |\n| AID_BiO | 0.6509 | 0.7365 | 0.7762 |\n| ITD_BiO | 0.6411 | 0.7210 | 0.7721 |\n| MRBO | 0.6103 | 0.6971 | 0.7519 |\n| FSLA | 0.6539 | 0.7399 | 0.7661 |\n| VRBO | 0.5951| 0.6805| 0.7429 |\n| VR-saBiAdam | 0.6812 | 0.7141 | 0.7523 |\n|AsBiO-BreD| 0.6653 | 0.7403 | 0.7830 |\n\n**Table 2**: The 5-way-5-shot case\n| Time | 20s | 40s | 60s |\n| ------------- | ------------- | ------------- | ------------- |\n| AID_BiO | 0.8316 | 0.8779 | 0.9032 |\n| ITD_BiO | 0.8131 | 0.8621 | 0.8968 |\n| MRBO | 0.8174 | 0.8634 | 0.8819 |\n| FSLA | 0.7993 | 0.8485 | 0.8824 |\n| VRBO | 0.7730 | 0.8305 | 0.8745 |\n| VR-saBiAdam | 0.7753 | 0.8188 | 0.8640 |\n|AsBiO-BreD| 0.8529 | 0.8967 | 0.9313 |\n\n\nIn the final version of our manuscript, we will mention more examples/applications of using non-smooth objectives. For example, Neural Network Architecture Search (NNAS) can be represented as a nonconvex bilevel problem with nonsmooth regularization such as group-Lasso or other structured-Lasso. \n\n**Q2**: It is not very clear how to compute the partial derivative…\n\n**R2**: Thanks for your suggestion. We have detailed the partial derivative $w_t$ in Lemma 5 at the supplementary material. In the final version of our manuscript, we will move this detailed partial derivative $w_t$ to the main body.\n\n**Q3**: In Algorithm 2, the usage of $\\eta_t$ may need more explanations…\n\n**R3**: Thanks for your suggestion. In our stochastic algorithms (SBiO-BredD and ASBiO-BredD), we use two learning rates to update the variable $y$. Under this case, we easily provide the error of updated variable $y$ given in Lemma 10 at the supplementary material. In practice, we can more flexibly choose the learning rate for updating variable $y$.\n\n**Q4**: It is nice that in both the theoretical and experiment parts, …then the major difference is the usage of Bregman distance. Does it account for the improvement?...\n\n**R4**: Yes, you are right. In our algorithms, the Bregman distances improve the performances by fitting the geometry of optimization problems based on the proper Bregman functions. In the convergence analysis, the proper strongly-convex Bregman functions used in Bregman distances can reduce the sample complexity of the proposed algorithms.\n",
" Thanks so much for your positive comments.\n\n**R1**: Thanks for your suggestion. In the final version of our manuscript, we will detail the explanations of lemmas. For example, Lemma 2 shows the smoothness of function $F(x)=f(x,y^*(x))$ and Lipschitz continuous of mapping $y^*(x)$. Lemma 4 shows the Lipschitz continuous of the estimated gradient estimator $ \\bar{\\nabla}f(x,y;\\bar{\\xi})$ ( or $\\bar{\\nabla}f(x_t,y_t;\\bar{\\xi}_t^i$ ) defined in sub-Section 4.2. \n\n**R2**: Thanks for your suggestion. In our algorithms, our key innovation is to use Bregman-distance to update the variable $x$, and apply the strongly-convex Bregman function in the Bregman-distance to improve the sample complexity of our algorithms. In fact, our algorithms are a class of Bragman-distance-based algorithm framework for bilevel optimization, which do not rely on any specific gradient estimators and varaice-reduced techniques. Specifically, in our deterministic algorithm (BiO-BreD), we use the gradient estimator as in [20]. I our stochastic algorithms (SBiO-BredD, ASBiO-BreD), we use the gradient estimator as in [21]. In our ASBiO-BreD algorithm, we use the variance-reduced technique of SPIDER, while [21] uses the momentum-based variance reduced technique of STORM. Clearly, we can also use other gradient estimators and variance-reduced techniques such as STORM to our algorithms. \n\n[20] Bilevel optimization: Convergence analysis and enhanced design, ICML-21;\n\n[21] A near-optimal algorithm for stochastic bilevel optimization via double-momentum, NeurIPS-21.\n\nMoreover, we provide a convergence analysis framework for our algorithms based on a useful and unified potential function $ \\Omega_t = \\mathbb{E}[F(x_t) + h(x_t) + ||y_t - y^*(x_t)||^2] $.\n\n**R3**: Thanks for your suggestion. In the final version of our manuscript, we will provide a short ablation studies to compare current algorithms performances in different settings. For example, to different batch-sizes, since we use the standard stochastic gradient estimator in our SBiO-BredD algorithm and the variance-reduced technique of SPIDER in our ASBiO-BredD algorithm, they rely on the relatively large batch-size, e.g., in convergence analysis, we choose batch-size $b= 2\\kappa^2\\epsilon^{-1} $ in our SBiO-BredD algorithm. While the SUSTAIN algorithm in [21] and the MRBO algorithm [37] use the momentum-based variance-reduced technique of STORM, they do not rely on the large batch sizes. Note that in fact, we provide a class of Bregman-distance-based algorithm framework in our paper, so our stochastic algorithms also can use the momentum-based variance-reduced technique of STORM to reduce the large batch size. \n\n**R4**: Thanks for your suggestion. Here, we add solving the nonsmooth bilevel optimization problems. Specifically, we consider the $L_1$ regularized hyper-representation learning problem, i.e. to learn a sparse hyper-representation. We compare our AsBiO-BreD (Algorithm 3) with various baselines. 
The test accuracy results for 5-way-1-shot (Table 1) and 5-way-5-shot (Table 2) over the Omniglot dataset are summarized in the Table below:\n\n**Table 1**: The 5-way-1-shot case\n| Time | 20s | 40s | 60s |\n| ------------- | ------------- | ------------- | ------------- |\n| AID_BiO | 0.6509 | 0.7365 | 0.7762 |\n| ITD_BiO | 0.6411 | 0.7210 | 0.7721 |\n| MRBO | 0.6103 | 0.6971 | 0.7519 |\n| FSLA | 0.6539 | 0.7399 | 0.7661 |\n| VRBO | 0.5951| 0.6805| 0.7429 |\n| VR-saBiAdam | 0.6812 | 0.7141 | 0.7523 |\n|AsBiO-BreD| 0.6653 | 0.7403 | 0.7830 |\n\n**Table 2**: The 5-way-5-shot case\n| Time | 20s | 40s | 60s |\n| ------------- | ------------- | ------------- | ------------- |\n| AID_BiO | 0.8316 | 0.8779 | 0.9032 |\n| ITD_BiO | 0.8131 | 0.8621 | 0.8968 |\n| MRBO | 0.8174 | 0.8634 | 0.8819 |\n| FSLA | 0.7993 | 0.8485 | 0.8824 |\n| VRBO | 0.7730 | 0.8305 | 0.8745 |\n| VR-saBiAdam | 0.7753 | 0.8188 | 0.8640 |\n|AsBiO-BreD| 0.8529 | 0.8967 | 0.9313 |\n\n",
" Thanks so much for your positive comments.\n\n**Q1**: On the algorithmic design side,...\n\n**R1**: Thanks for your comment. Our algorithms are a class of Bregman-distance-based algorithm framework for bilevel optimization, which do not rely any gradient estimators. Specifically, in our deterministic algorithm (BiO-BreD), we use the gradient estimator as in [20]. I our stochastic algorithms (SBiO-BredD, ASBiO-BreD), we use the gradient estimator as in [21]. Clearly, we can also use other gradient estimators and variance-reduced techniques to our algorithms. \n\n[20] Bilevel optimization: Convergence analysis and enhanced design, ICML-21; \n\n[21] A near-optimal algorithm for stochastic bilevel optimization via double-momentum, NeurIPS-21.\n\n**Q2**: On the theoretical side,...\n\n**R2**: Thanks for your comment. We provide a useful convergence analysis framework for our Bregman-distance-based algorithms. In our convergence analysis, we establish a useful and unified potential function $ \\Omega_t = \\mathbb{E}[F(x_t) + h(x_t) + ||y_t - y^*(x_t)||^2] $ for our algorithms. \n\n**Q3**: The numerical verification of the problem is limited,...\n\n**R3**: Thanks for your comment. In fact, we have the other experimental results (test accuracy) about Hyper-representation Learning given in supplementary material (at pages 13-14). Here, we also add solving the nonsmooth bilevel optimization problems. Specifically, we consider the $L_1$ regularized hyper-representation learning problem, i.e. to learn a sparse hyper-representation. We compare our AsBiO-BreD (Algorithm 3) with various baselines. The test accuracy results for 5-way-1-shot (Table 1) and 5-way-5-shot (Table 2) over the Omniglot dataset are summarized in the Table below:\n\n**Table 1**: The 5-way-1-shot case\n| Time | 20s | 40s | 60s |\n| ------------- | ------------- | ------------- | ------------- |\n| AID_BiO | 0.6509 | 0.7365 | 0.7762 |\n| ITD_BiO | 0.6411 | 0.7210 | 0.7721 |\n| MRBO | 0.6103 | 0.6971 | 0.7519 |\n| FSLA | 0.6539 | 0.7399 | 0.7661 |\n| VRBO | 0.5951| 0.6805| 0.7429 |\n| VR-saBiAdam | 0.6812 | 0.7141 | 0.7523 |\n|AsBiO-BreD| 0.6653 | 0.7403 | 0.7830 |\n\n**Table 2**: The 5-way-5-shot case\n| Time | 20s | 40s | 60s |\n| ------------- | ------------- | ------------- | ------------- |\n| AID_BiO | 0.8316 | 0.8779 | 0.9032 |\n| ITD_BiO | 0.8131 | 0.8621 | 0.8968 |\n| MRBO | 0.8174 | 0.8634 | 0.8819 |\n| FSLA | 0.7993 | 0.8485 | 0.8824 |\n| VRBO | 0.7730 | 0.8305 | 0.8745 |\n| VR-saBiAdam | 0.7753 | 0.8188 | 0.8640 |\n|AsBiO-BreD| 0.8529 | 0.8967 | 0.9313 |\n\n",
" The paper studies the nonconvex outer-objective and strongly convex inner-objective bilevel optimization problem through the lens of Bregmen distance. The paper covers the deterministic optimizaton and stochastic optimization. in both situation, the authors provide the algorithm and its convergence analysis. The theoretical results shows the proposed algorithm with the aid of Bregman distance improve the performance with respect to the condition number $\\kappa$ and utilizing the variance reduction technique could further improve the dependency on $\\epsilon$ with a order of $\\frac{1}{2}$. The numerical experiment further prove the efficiency of the proposed algorithm. The strengths of the paper is obvious that it proposes the algorithms that could improve the theoretical upper-bound of previous state-of-the-art results, the presentation is clear, and the numerical experiment is sound.\n\nThe weaknesses mainly because the improvement is predictable. It is well-known that the variance-reduction technique could improve the oder w.r.t. the accuracy $\\epsilon$, and the Bregman disctance helps with the condition number $\\kappa$. Q1: On the algorithmic design side, is there any extra effort that we need to make rather than directly replace the exact gradient with the gradient estimations? If it need some specific design, I would like to increase my score.\n\nQ2: On the theoretical side, is there any extra effort other than incorporating the biases introduced by the gradient estimation? I would like to increase my score of rating if so. The numerical verifications of the problem is limited, for each algorithm, we only have one experiment. The experiment shows the proposed algortihms have lower losses. It would be better if the authors provide more experiments. I would like increase my score if the author could provide more experimental results during rebuttal.",
" This paper incorporate the Bregman distance into bilevel optimization and propose three methods (BiO-BreD, SBiO-BredD, ASBiO-BreD) which targets at addressing deterministic and stochastic bilevel problem. Such proposed algorithms have matched best target accuracy $\\epsilon$ and improved the condition number $\\kappa$ compared with other benchmarks. Meanwhile, such analysis is adaptable for nonsmooth outer function. The experiments also demonstrate the superior performance of proposed algorithms. In terms of strengths, the proposed work shows the condition number improvement in terms of convergence analysis. Meanwhile, in different experimental settings, the proposed algorithms have demonstrated its superior performance individually. Both assumptions and convergence analysis are standard and easy to follow.\n\nIn terms of weakness, several Lemmas (Lemma2, Lemma4) in main body lack explanations. The theoretical analysis is very standard while it will be better to point out the technical innovations. 1. It will be better to include explanations for different Lemmas which can help reader to know what is the meaning behind each lemma in main body.\n\n2. It will be better to discuss the technical innovations compared with previous work [21]. Note that [21] refers to different works in main body and supplementary.\n\n3. It will be better to include a short ablation studies to compare current algorithms performances in different settings. (e.g. different batch sizes). \n\n4. The algorithm is analyzed in non-smooth function space while the experiments are based on smooth function. It will be better to try non smooth experiment. Non negative societal impact. ",
" This paper proposes a new class of bilevel optimization (BO) problems both in deterministic and stochastic forms. Compared to the classic BO problem, their outer function has an additional nonsmooth term, which makes their model more general, e.g. it contains the case when we use $L_1$ regularization.\n\nThen three algorithms are proposed. The first two are used to solve deterministic/stochastic BO problems (in the new form) respectively. And the last one is an accelerated version of the second algorithm. Strengths: 1) this paper broadens the class of BO problems that appeared in previous literature which leads to the demand for new algorithms (because previous algorithms can not deal with non-smooth outer function); 2) under similar assumptions compared with related works, the algorithms proposed in this paper achieve the best convergence rate with known condition number\n\nWeaknesses: 1) This paper only mentions one circumstance where we need to consider a non-smooth outer function (when we use $L_1$ regularization). That may narrow the unique field of application of this work, i.e. this work can do but the previous works can not. It is good to mention more examples/applications of using non-smooth objectives. \n 1) It is not very clear how to compute the partial derivative $\\omega_t$ in Algorithm 1 line 8. Maybe more explanation is needed.\n\n2) In Algorithm 2, the usage of $\\eta_t$ may need more explanations. Why the step size is $\\lambda \\eta_t$ however in Algorithm 1 $\\lambda$ is used?\n\n3) It is nice that in both the theoretical and experiment parts, the performance of proposed algorithms is better than baseline algorithms. But where does this improvement come from? When compared with baseline algorithms, I guess $h(x) = 0$, then the major difference is the usage of Bregman distance. Does it account for the improvement? If the authors can find more explanations, this work may have more theoretical value. The efficiency of proposed algorithms depends on the choice of $\\psi$, which is used to define Bregman distance. In the experiment part, it is chosen such that the updating of $x$ is very easy, i.e. a closed-form solution exists when $h(x) = 0$. But when other $\\psi$ is chosen or $h(x) \\neq 0$, how to efficiently solve the problem to update $x$? The complexity of solving this subproblem seems does not appear in the comparison with other algorithms. Maybe the authors can make it more clear how to solve this subproblem for general $h(x)$."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"jhLtNORQNhsG",
"9p0W9d6mWbI",
"6fSohfQfZRG",
"smplBqKfVVX",
"NfOFi13e2Ws",
"eOgVesah4x3",
"nips_2022_pBpwRkEIjR3",
"nips_2022_pBpwRkEIjR3",
"nips_2022_pBpwRkEIjR3"
] |
nips_2022_Vg_02McCRnY | Optimal Comparator Adaptive Online Learning with Switching Cost | Practical online learning tasks are often naturally defined on unconstrained domains, where optimal algorithms for general convex losses are characterized by the notion of comparator adaptivity. In this paper, we design such algorithms in the presence of switching cost - the latter penalizes the typical optimism in adaptive algorithms, leading to a delicate design trade-off. Based on a novel dual space scaling strategy discovered by a continuous-time analysis, we propose a simple algorithm that improves the existing comparator adaptive regret bound [ZCP22a] to the optimal rate. The obtained benefits are further extended to the expert setting, and the practicality of the proposed algorithm is demonstrated through a sequential investment task. | Accept | This is a technical, but interesting paper on online linear optimization. The nice contribution is a control of the switching cost (moving from one action to another) which makes the problem highly non-trivial.
The contribution is to consider a "smaller" set of assumptions (hence a weaker asymptotic result) than in the existing literature, but this allows to get better parametric rates.
This might not be the most breathtaking paper, but the reviewers and I find it sufficiently interesting to be accepted at NeurIPS.
Congratulations ! | train | [
"lvu6W2CKFPE",
"h2091y7giwk",
"12W-a6qSun",
"MsrhMKvdx7j",
"SLFrIITZ3yY",
"qZDhzxZGR0Z",
"6UAlOV0asS",
"6JbO2KjduAq",
"SAI4Xa3y7L",
"3algeuBdqST",
"kXxikA6Sogq",
"VuQRq1NJWBd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think the authors have properly addressed my questions. It would be very nice if the authors could incorporate those discussions in the revised version. So I will increase my score to acceptance.",
" Thank you for the detailed response. \n\nAlthough I still worry that the techniques used in this paper is somewhat incremental to [ZCP22b], the response indeed address some of my concerns. \nI am willing to revised my score from 4 to 5.",
" Thank you for your careful review, we appreciate your general support for our paper.\n\n1. On the writing. Thank you for your suggestions. We agree with you that our introduction is a bit technical and requires some prior knowledge on parameter-free online learning. We will try to reduce the amount of technical arguments and make it more approachable for a broader group of readers. Also, we will add a section to the appendix surveying the background of this topic. \n\n2. On the terminology. Yes, we totally agree that the name \"parameter-free online learning'' is due to certain historical reasons, and it is kind of an overclaim. Recent works often use \"parameter-free online learning'' and \"comparator-adaptive online learning'' interchangeably. Based on your comment, we think the latter would be more accurate, especially for future research building on our work. We will revise this in the camera-ready version. \n\n3. Questions on the experiment. Both our algorithm and the baseline [ZCP22a] require $G$ and $\\lambda$ as inputs. In our experiment, we feed the correct $G$ and $\\lambda$ to them, thus making the comparison fair. The result qualitatively verifies our theory. However, we did not try incorrect $G$ and $\\lambda$ on these algorithms. This is a bit orthogonal to the focus of this paper, but indeed, we think the robustness of these algorithms is an interesting problem for future research (both theoretically and empirically). ",
" Thank you for your feedback. However, with due respect, we believe the main contribution of our paper and its novelty have been overlooked in your review. \n\n**1. \"Comparison to other online learning papers with switching costs.\"**\n\nOnline learning with switching costs has been studied by many prior works, as we reviewed in Line 101-116. However, in this paper we study **adaptive (parameter-free) algorithms** in this setting. **This is significant and technically nontrivial for many reasons.**\n\n- Non-adaptive algorithms require a bounded domain, and typically they (including [ZJLY21]) use the diameter of the domain to tune the learning rates. However, many practical problems are naturally defined on unconstrained domains where *non-adaptive algorithms cannot be applied*. We not only relax the assumption of knowing the diameter, but also deal with the more practical and challenging setting where the domain is not bounded at all. \n\n- Parameter-free algorithms can take a prior as initialization, and the regret bound naturally adapts to it. This is useful when a good prior can be obtained from domain knowledge or transfer learning. Furthermore, in the LEA setting (Line 85-93), parameter-free algorithms are automatically equipped with quantile regret bounds.\n\n- Parameter-free algorithms with switching costs have important applications in linear control, as reviewed in Line 123-131 and highlighted by Reviewer KR3f's comment. \n\n- Algorithmically, parameter-free algorithms are based on very different principles compared to non-adaptive ones. For example, they typically do not have learning rates. The design of these algorithms is less understood, and especially, the problem becomes even more challenging when switching costs are introduced. Our paper has to deal with the trade-off between parameter-freeness and switching costs (Line 40-51), and it is not clear a priori what is the optimal rate and how to achieve it. \n\n**2. \"Comparison to [ZCP22b].\"**\n\nOur algorithm belongs to the classical potential method in online learning. The main contribution of [ZCP22b] is a continuous-time framework for designing parameter-free potentials without switching costs. **In this paper, we extend their argument in a nontrivial way:**\n\nFor our setting, the only known algorithm [ZCP22a] is not a potential method - it is unclear whether any potential method can achieve our goal. A suitable potential function should incorporate the switching cost weight $\\lambda$ in a smart way. The question is, how to do this? Simply scaling the learning rates does not work, since here we don't have learning rates at all. \n\nIt turns out that bringing the game to the continuous-time limit (Appendix A.1) indeed yields a good potential, and more importantly *gives us an interpretable insight*: the key is to use $\\lambda$ to scale the sufficient statistic, and the scaling order also naturally emerges. In other words, besides the argument from [ZCP22b], we further show that\n\n**- the continuous-time framework could design potential functions for challenging settings where no potentials have been known to work;**\n\n**- this framework not only is quantitative, but also generates interpretable insights.** \n\nWe believe these features are nontrivial contributions to the continuous-time framework for adaptive online learning [ZCP22b]. \n\nAs for the techniques, we only use the framework of [ZCP22b] to derive the continuous-time potential. 
Verifying this potential in discrete time is a different task - we need to use a more conservative $\\alpha$ than what is suggested in continuous time. Moreover, the two key lemmas (Lemma 2.2 and 2.3) for the analysis are technically nontrivial and not contained in [ZCP22b]. \n\n**3. \"$L_2$ switching cost.\"**\n\nThe prior work [ZCP22a] considers $L_2$ switching cost. Its Alg.2 is a reduction from the general dimensional problem to 1D, based on the standard polar-decomposition trick. The idea is to parameterize the high-dimensional space using a direction and a length - the direction can be learned using gradient descent on the unit $L_2$ norm ball, and the length can be learned using the 1D parameter-free algorithm on $\\\\mathbb{R}_+$. \n\nOur algorithm directly improves the 1D algorithm of [ZCP22a], therefore can also use the above technique to handle higher-dimensional $L_2$ switching costs. The only required modification is an extra regularization term. That is, we have to bound\n$$\n\\sum_{t=1}^Tg_t(x_t-u)+\\lambda\\\\sum_{t=1}^{T-1}|x_t-x_{t+1}|+\\sum_{t=1}^T\\frac{\\lambda}{\\sqrt{t}}|x_t|.\n$$\nTo do this, we can define a surrogate loss $l_t(x)=g_tx+\\lambda t^{-1/2}|x|$, and then run our algorithm on the linearized version of $l_t$. It is not hard to verify that the regularization term can also be controlled in this way. We will add this extension to the camera-ready version. \n\nHope our response has addressed your concerns. If so, we would appreciate it if you update your rating. We would also be glad to answer further questions. ",
" Thank you for your feedback and your support for our paper. \n\n**1. Question on Line 156-163.**\n\nThe argument we aim to make is the following. Suppose we can show that the two terms in Eq. (2) *separately* satisfy the parameter-free bound. Then, it means that for all $u\\in\\mathbb{R}$, \n\\begin{equation*}\n\\sum_{t=1}^{T-1}|x_t-x_{t+1}|\\leq 1+|u|\\tilde O(\\sqrt{T}).\n\\end{equation*}\nThe left hand side only depends on the operation of the algorithm, and *does not depend on the comparator $u$*. Therefore, in order for the above to hold for all $u\\in\\mathbb{R}$, we must have $\\sum_{t=1}^{T-1}|x_t-x_{t+1}|\\leq 1$, which is clearly very conservative. \n\n**2. \"Sensitivity to $C$ and $\\alpha$.\"**\n\nThe dependence on $C$ is quite standard in parameter-free online learning. Setting it to an arbitrary constant (e.g., 1) is already good enough for theoretical purpose and many practical applications [CLO20]. \n\nAs discussed in Appendix A.1, the ideal $\\alpha$ suggested by the continuous-time analysis is $1/2+\\lambda$. We use a larger one just to control the discretization error (Lemma 2.3), but this is a sufficient condition rather than a necessary one. In practice we speculate that setting it slightly lower would still lead to good performance. \n\n**3. \"Computational efficiency.\"**\n\nYes, computing the discrete derivative is easy and fast. The potential function in its double integral form may seem ugly, but due to Line 764, we can rewrite it without integrals, using the *imaginary error function* $\\mathrm{erfi}$. Evaluating the latter is easy in standard software packages like scipy. \n\nFor general $d$-dimensional problems, we run a 1D algorithm on each dimension. This is the same idea as AdaGrad, and isn't hard to compute. Furthermore, we can use a simple polar-decomposition technique as in [ZCP22a], such that we only need one 1D algorithm instead of $d$ copies of it.\n\n**4. Notation**\n\nWe apologize for the confusion, $||\\cdot||_*$ means dual norm. We will add a definition to it. ",
" First, thank you for your insightful comments!\n\n**1. \"The baseline can achieve $O(\\sqrt{\\lambda})$ rate through a mini-batching technique.\"**\n\nThis is completely correct, thank you for spotting it! We were focusing on the comparison to the vanilla form of the baseline (Alg.1 of [ZCP22a]) and overlooked the extension in Alg.3 of the latter. This is our fault, and we will correct it in the final version. Our two main improvements over the baseline still hold, i.e., Pareto-optimal loss-regret trade-off and the overall $O(\\sqrt{T})$ rate (even if one insists on constant regret at the origin, by setting $C$ appropriately we improve the logarithmic factor of [ZCP22a] from $\\log(T)$ to $\\sqrt{\\log(T)}$, which is optimal). \n\n**2. \"Relation to [ZCP22b].\"**\n\nOur paper and [ZCP22b] have quite different scopes, therefore their comparison is not emphasized in the Introduction. Specifically,\n\n**Setting.** [ZCP22b] considers the parameter-free OLO problem *without switching costs*, therefore it does not face the challenge of trade-offs described in Line 45-51. Instead, [ZCP22b] considers the same setting as [MO14] and [OP16], but with improved bounds. Line 94-100 in our paper surveys this line of work. \n\n**Contribution.** [ZCP22b] makes two main contributions: ($i$) improving the standard parameter-free OLO bound [OP16]; and ($ii$) proposing a continuous-time framework to design parameter-free potentials. Their first contribution is not applicable to us since our setting is different. **In this paper we further develop their second contribution in a nontrivial manner, as described below.**\n\nFirst, without switching costs, the potential method is a popular strategy for parameter-free online learning, and [ZCP22b] provides a good way to design these potential functions. However, when switching costs are considered, it is not so clear whether the potential method is still effective - the existing algorithm [ZCP22a] is not a potential method. Suppose a potential method works in this setting, then apparently it needs a good way to incorporate the switching cost weight $\\lambda$. The question is, how to do this? Simply scaling the learning rates (as in the minimax analysis of switching costs) does not work, since here we *don't have learning rates at all*. \n\nIt turns out that bringing the game to the continuous-time limit (Appendix A.1) indeed yields a good potential, and more importantly *gives us an interpretable insight*: the key is to use $\\lambda$ to scale the sufficient statistic, and the scaling order also *automatically* emerges. In other words, \n\n**- the continuous-time framework could design potential functions for challenging settings where no potentials have been known to work;**\n\n**- this framework not only is quantitative, but also generates interpretable insights.**\n\nWe find these features quite remarkable, and nontrivially extend the argument from [ZCP22b].\n\n**Technique.** As discussed in Appendix A.1, our derivation of the potential function has a similar intuitive flow as [ZCP22b]. However, there are major deviations when we convert the analysis back to discrete time, discussed in Line 602-607. We need to set $\\alpha$ to a more conservative value. Moreover, the key Lemmas 2.2 and 2.3 are technically nontrivial, and not contained in the prior work.\n\n**3. \"Intuition behind the continuous-time analysis and the potential (3).\"**\n\nOur analysis has the following intuitive flow. 
We start from Eq (5) in Line 568 - this is a discrete-time recursion characterizing a class of good potentials. *Any* function $V$ that satisfies this recursion would yield a regret bound, therefore our goal is to solve this recursion and find the right potential function among the solution class. However, solving this discrete-time recursion in closed-form is a challenging task, therefore instead of pursuing exact solutions, we pursue approximate solutions in continuous time. This still leads to regret bounds, as long as we characterize the approximation error. \n\nWe approach continuous time by scaling the unit time and the gradient space in Eq (5), and this gives us a PDE (Line 588). Quite surprisingly, this PDE is the same type as the one from [ZCP22b] but with a different coefficient. Put in another way, incorporating the switching costs has such a simple quantitative effect in the continuous-time limit, which is quite hard to observe in the original discrete-time setting.\n\nFinally, we use a *change-of-variable* to transform our PDE into the one from [ZCP22b]. This naturally yields the interpretable insight of *dual space scaling*. \n\n**4. \"Extension to non-stochastic control.''**\n\nYes, our algorithm can be used to construct strongly adaptive algorithms for OCO with memory. A couple of linear control algorithms can stem from there. We also appreciate your suggestion on writing. Adding a discussion on nonstochastic control can indeed motivate our problem better. ",
" Thank you for your feedback, we appreciate your support for our paper. \n\nRegarding the novelty of our paper, we respectfully disagree with your comment that it is incremental compared to prior works. Essentially, parameter-freeness and switching costs are two opposite considerations we need to trade-off. Such trade-offs in online learning are quite subtle, and it is generally unclear where the achievable boundary is (Line 45-51). The prior work [ZCP22a] only achieved a suboptimal trade-off using a quite complicated strategy, which isn't satisfactory both theoretically and empirically. In this paper we show that a much simpler strategy, when designed carefully, can indeed improve the bound to the optimal rate. Moreover, it reveals an elegant insight: a general way of incorporating switching costs is to scale on the dual space. We believe this is a nontrivial contribution to the field. \n\nRegarding your question, \"how much does the bound worsen if only an upper bound on $G$ is known?'' Our bound would scale linearly with the given Lipschitz constant $G$. However, we may do better by also considering *the adaptivity to the observed gradients* - this is an interesting open problem for future research. When there are no switching costs, there exist parameter-free algorithms (e.g., [MK20]) that ($i$) do not require a given $G$; and ($ii$) guarantee $\\tilde O(|u|\\sqrt{\\sum_{t=1}^T|g_t|^2})$ bound instead of $\\tilde O(|u|G\\sqrt{T})$. The problem becomes trickier when we add switching costs, as hinted by some negative results [Gof14]. It is unclear where the achievable boundary is. ",
" The authors study online learning in a parameter-free setting. These settings have the benefit of being applicable to settings where the domain is unbounded and where prior information is not available. They generalize some existing algorithms to include switching costs. Switching costs can model a scenario where the learner is incentivized to not change its prediction significantly. They give an algorithm for Online linear optimization that is more or less optimal in several useful metrics. They also experiment on both synthetic datasets as well as real-world datasets, surpassing the baseline significantly.\n Originality & Significance - The models and framework seem to be standard. The work done in the paper seems to be original to the best of my limited knowledge. Related to the prior work, I find the current work to be incremental and is of moderate significance.\n\nQuality & Clarity - The paper is well-written and the exposition is easy to follow. The quality of writing is above-average and the arguments made in the non-technical parts of the paper were cogent. Due to my lack of domain knowledge and time constraints I did not verify all technical details of the paper.\n The result for Algorithm 1 is when G is exactly known. Quantitatively, how much does the bound worsen if only an upper bound on G is known?\n None.",
" This work studies a meaningful problem called online linear optimization with switching costs. Previous progress on this problem, [ZCP22a], obtains a suboptimal regret bound through a complicated coin betting algorithm. This work improves the previous result in several aspects with a simpler and more elegant algorithm based on some specially designed potential functions. This work also extends the above result to higher dimensions and bounded domains. Besides, in LEA with switching cost, with more structural information, this work proposes a novel reduction from bounded to unbounded domains, which improves the previous bound even without the existence of switching cost. Finally, empirical studies evaluate the effectiveness of the proposed method. Strengths:\n\n1. The problem of OLO with switching cost is meaningful due to its deep connection with some online decision-making problems, such as online non-stochastic control.\n\n2. The main algorithm, i.e., Algorithm 1, is simple and elegant. And the corresponding theoretical result improves the suboptimal one in [ZCP22a] in multiple aspects.\n\n3. The result in LEA with switching cost is novel, which also imports some novel ideas in the LEA-OLO reduction, even without the existence of switching cost. The illustration in Figure 1 is clear enough.\n\nWeaknesses: please mainly refer to the “Questions” part below.\n 1. Actually, [ZCP22a] already obtained an optimal dependence on $\\lambda$, i.e., $O(\\sqrt{\\lambda})$, through a mini-batching technique (see Algorithm 3 in [ZCP22a] for more details), but the authors do not mention it in the draft. Can the authors explain about this?\n\n2. The work [ZCP22b] seems to share much in common with this work, such as potential functions. While in my opinion, the authors did not sufficiently compare the difference between this work and that of [ZCP22b], which is inappropriate. Can the authors give a more detailed comparison of their work and [ZCP22b], in terms of both problems and techniques?\n\n3. Although the main algorithm, i.e., Algorithm 1, is pretty simple and thus elegant, it is not intuitive enough, at least for me. For example, what is the intuition behind the potential function in (3)? The authors said that there is a corresponding analysis in Appendix A.1. Can the authors explain more about the intuition that guides the algorithm design?\n\n4. I guess the extension to online non-stochastic control as in [ZCP22a] is straightforward by reducing the control problem to OCO with memory with a particular policy parametrization?\n\n5. The problem of OLO with switching cost is meaningful due to its deep connection with some online decision-making problems, such as online non-stochastic control. As a suggestion, the authors can make this connection clearer in the main paper, at least in the introduction part.\n Not much.",
" The authors design an algorithm for online linear optimization (OLO) with switching cost, and prove the optimality of the proposed algorithm, as an improvement of an existing algorithm. The trade-off between adaptivity and switching cost, as well as an extension of the algorithm to the learning with expert advice (LEA) are discussed. Strengths:\n1. While the problem of OLO with switching cost is not new, the authors improve the result on the regret bound, and show the optimality of their result in several forms.\n2. The key idea that incorporating switching costs by scaling on the dual space seems new and interesting.\n3. Overall the paper is well-organized and mostly easy to follow.\n\nWeakness:\n1. As authors point out, the result requires the knowledge of the Lipschitz constant G and assumes $\\lambda$ is time-invariant, which might limit the application of the method in practice.\n2. Please see my additional questions/concerns in the next part.\n 1. I am a bit confused by the argument from line 156 to 163. Suppose the sum of switching cost is $1+|u|O(\\sqrt{T\\log(|u|T)})$, then this \"budget\" reduces to $O(1)$ only when $u=0$, which implies the comparator is not far away from the initial $x$. For far-away comparators, the budget grows at $O(\\sqrt{T})$. I do not see how the conclusion is drawn.\n\n2. In Algorithm 1, the authors set a hyperparameter $C$, and there is also another parameter $\\alpha$ in the potential function. Will the performance be sensitive to these two parameters? Should one always choose $\\alpha = 4\\lambda G^{-1}+2$ in practice?\n\n3. Is it easy and fast to compute the discrete derivative $\\bar{\\nabla}_SV(t,S)$ in Algorithm 1, when the dimension is not small?\n\n4. What does $||\\cdot||_*$ stand for? (e.g., in line 94) The authors adequately discuss the limitation and potential improvement for the paper.",
" This paper studies the problem of parameter-free online learning in the presence of switching cost. Based on a novel dual space scaling strategy, the authors obtain the optimal regret bound for unconstrained OLO with switching cost. The results are also extended to the expert setting, and numerical results on portfolio selection problem are reported. Strengths: \n\n1. The proposed algorithm is parameter-free and improves the suboptimal result in [ZCP22a] to the optimal rate. The results can also extend to the expert setting.\n\n2. This paper is clear-written and technically sound. The theoretical analyses are supported by rigorous proofs which are sound to me.\n\nWeakness:\n\nMy main concerns are contribution and novelty.\n1. As for contribution, since there already exist methods such as [ZJLY21], that achieve optimal $\\mathcal{O}(\\sqrt{T})$ (dynamic) regret for OCO problem. Although the proposed method in this paper does not require the diameter of the domain, this improvement seems not significant enough.\n\n2. As for novelty, the proposed method and the proofs are not very different from [ZCP22b], which have already been mentioned in the paper. So, it seems to me that the method is not novel enough. In this paper, the switching costs are in the form of $L_{1}$ norm and the authors claim that the proposed strategy can be extended to other norms. Since $L_{2}$ norm or $L_{2}$ norm squared are also widely used for switching costs, I wonder how to extend the strategy to $L_{2}$ norm, especially for higher-dimensional domains? \n\n(I mean, for higher dimensions, the authors run Algorithm 1 on each coordinate separately to solve it. This is valid for $L_{1}$ norm, since the $L_{1}$ switching cost is the sum of the cost of each coordinate. However, would it be problematic for $L_{2}$ norm or $L_{2}$ norm squared?) No problems here.",
" This theoretical article is about \"parameter-free\" regret minimization for (linear) online learning with switching cost\n\nFor non-specialists like me, this appealing notion of \"parameter-free\" needs to be clarified somehow. When analysing an algorithm like stochastic gradient descent, its convergence time upper bound is often given according to some hyper-parameters like the learning rate. The best convergence time is then obtained by setting the learning rate that minimizes this convergence time. If this learning rate is unknown, tuning this parameter has a cost which impacts the convergence time. It has been shown that his cost depends on the square root of the logarithm of the distance between the initial point of the optimization and the (unknown) target.\nIt seems however that some \"parameters\" of the problem, like the Lipschitz constant G, that are required by the algorithms are not taken into account in this definition. In other words \"parameter-free\" means that we integrate the cost of tuning -- some -- parameters in the analysis.\n\nThe ability for an online algorithm to handle switching costs is appealing for practical applications where mixing policies is costly or not desirable. By essence, a parameter-free algorithm has to be more adaptable hence more reactive than a classical algorithm, but the switching cost favors algorithms with stable behaviours. This leads to a balance\n\nThe key contribution of this article is to provide a new dual approach that improves the parameter-free regret bound provided in [Zhang et al. 2022a] for this switching cost setting.\n\nThe Online Linear Optimization algorithm is then transposed to solve the Online Learning with Expert's Advices problem (with switching cost). And it is evaluated through simulations on a portfolio selection problem with transaction costs. It compares favorably against the [Zhang et al. 2022a] baseline.\n\nThe proofs are detailed along 20 pages of supplementary material. Some code is provided as well. Weaknesses / General remarks:\n- The paper is far from being self-contained. It has been written for researchers who are already well read on \"parameter-free\" online learning. As a newcomer I had to follow a full tutorial on the subject to understand the introduction. I think the author should make an effort to write a more readable introduction. 
Maybe by providing a few concrete historical examples of parameter-free algorithms.\n- The \"parameter-free\" appellation used by this community is \"historic\" but still a bit of an overstatement for an algorithm, like Algorithm 1, which requires to know or guess the Lipschitz constant G and the switching cost lambda parameter in order to work.\n\nStrengths:\n- The integration of switching costs in the Online Learning with Expert's Advices (LEA) problem is an important problem both from the practical point of view and from the theoretical point of view.\n- An efficient online learning algorithm able to cope with this problem with as few parameters as possible is a clear contribution (even if G and lambda are still required).\n- Once into it, the paper is correctly written, and the maths seem solid (but I did not check the proofs).\n- The author provided a code to reproduce their experiments in supplementary material.\n The algorithm being not really parameter-free, why not change the title to \"Optimal almost-parameter-free Online Learning with Switching Cost\" ?\nYou set the G and lambda parameters respectively to G=1 and lambda=0.1 in your synthetic market experiments. Did you inform your algorithm of these parameters beforehand ? Did you try running your algorithm with the wrong parameters ?\nDo the coin betting strategy proposed in [Zhang et al. 2022a] require such parameters ? Is the comparison fair on that ground ?\n The limitations are explicitly given in the conclusion.\nThe authors underline that the Lipschitz constant G and the time-invariant switching cost lambda are required by their algorithms. Hence admitting that their algorithms are not really \"parameter-free\". \n\nAdapting to changing costs and dropping the G parameter is left as future work.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
3,
4,
1
] | [
"qZDhzxZGR0Z",
"MsrhMKvdx7j",
"VuQRq1NJWBd",
"kXxikA6Sogq",
"3algeuBdqST",
"SAI4Xa3y7L",
"6JbO2KjduAq",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY",
"nips_2022_Vg_02McCRnY"
] |
nips_2022_3uj_8G7fxgs | Multi-objective Deep Data Generation with Correlated Property Control | Developing deep generative models has been an emerging field due to the ability to model and generate complex data for various purposes, such as image synthesis and molecular design. However, the advance of deep generative models is limited by the challenges of generating objects that possess multiple desired properties because: 1) the existence of complex correlation among real-world properties is common but hard to identify; 2) controlling an individual property enforces an implicit partial control of its correlated properties, which is difficult to model; 3) controlling multiple properties in various manners simultaneously is hard and underexplored. We address these challenges by proposing a novel deep generative framework that recovers semantics and correlation of properties through disentangled latent vectors. The correlation is handled via an explainable mask pooling layer, and properties are precisely retained by the generated objects via the mutual dependence between latent vectors and properties. Our generative model preserves properties of interest while handling correlation and conflicts of properties under a multi-objective optimization framework. The experiments demonstrate our model's superior performance in generating objects with desired properties. | Accept | All three reviewers argue to accept (albeit one borderline).
Extremely substantial response from the authors addressing individual reviewer comments, which led to reviewers raising their scores, and to a much revised paper with new experiments. This effectively led to a second round of review, with engaged reviewers who confirmed their concerns have largely been met. | train | [
"v-1OuWhUlq",
"gmI6fhZ_9Au",
"9l60dqpdrI-",
"vUIthqPDPTj",
"JfblpkrDxk",
"i4BDPd-kdgP",
"MV4Wb8WcfezO",
"jmu7w7SxDQc",
"sTiNmqqj8UP",
"4crt81Gj3iO",
"Ii9mQgzUXkyT",
"2TzjVAsWa-w",
"E64hFLLC5eP9",
"ZS6cvsv2QMi",
"veiF-WnHrWP",
"3gnEy8lYbDS",
"m3Mgv9w3Xtt",
"smBe3JwDx2b",
"GU0VImi3IEs"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My comments have been properly addressed.",
" We sincerely thank the reviewer for approving our clarification and we are glad to further discuss the evaluation of generated data.\n\nEvaluation metrics include novelty, uniqueness, diversity, validity and similarity to original distribution such as Fréchet Inception Distance (FID). There are some trade-off among them. For example, a totally random data generator could have a very good novelty, while a data copier can follow the training distribution and maintain validity perfectly, but typically neither of them is considered as good enough. More commonly we want a trade-off between these two extremes and we may prefer different ways of the trade-off for different applications. Sometimes we have some implicit criteria (e.g., the generated shape needs to be convex but we are fine if it has a new orientation) in our human mind but the model may not always guarantee this for free unless we “tell” these criteria explicitly to the model (e.g., by adding constraints or property controlling). Also, we agree that generated properties should be somewhat close to the training data, and the “close” here could be “similar to a specific training sample in some aspect but not the others” or “similar to the hybrid of several training samples” or more generally “similarity between training data distribution and generated data distribution”. For example, when our training set contains the shapes like square, circle, and oval, then the generated one could look like a square, circle, oval, or a hybrid of them, or at least should be a convex shape with high probability, and very unlikely to be a bar shape because its corresponding probability to be sampled is low and far away from the “mean” we learned for this generative model.",
" We sincerely thank the reviewer for the suggestion to compare with Bayesian optimization-based model and the patience. We have conducted additional experiments by predicting properties with the whole $w$ and performing property control via Bayesian optimization. In this case $w’$ and the mask layer are dropped. The results that compare CorrVAE and the Bayesian optimization-based model (BO) are shown in the table below, and are updated as Appendix Table 8 in the paper.\n\n| Model | dSprites | | Pendulum | |\n|---------|----------|--------------|----------------|-----------------|\n| | size | x+y position | light position | shadow position |\n| BO | 0.0033 | 0.0062 | 19.2387 | 17.2858 |\n| CorrVAE | 0.0016 | 0.0066 | 15.3900 | 6.0250 |\n\nBased on the results, CorrVAE achieves smaller MSE on both light position and shadow position of Pendulum dataset. Specifically, for light position, MSE achieved by CorrVAE is 15.3900 which is much smaller than 19.2387 obtained from the Bayesian optimization-based model. For shadow position, MSE achieved by CorrVAE is 6.0250 which is much smaller than 17.2858 obtained from the Bayesian optimization-based model. Besides, on dSprites dataset, CorrVAE achieves the MSE of 0.0016 for the size which is smaller than 0.0033 obtained from the Bayesian optimization-based model. In addition, CorrVAE achieves comparable results on x+y position with the Bayesian optimization-based model. The results indicate that CorrVAE has better performance than the Bayesian optimization-based model on controlling independent variables (i.e., size in dSprites, light position in Pendulum) and correlated properties (shadow position in Pendulum).\n\nWe will add detailed discussion in the paper.",
" I understand the argument, but on the other hand, results obtained by beta-vae [a] and subsequent works [b,c] tends to indicate that it is not so difficult to encode pos_x and pos_y into a single value each without supervision at all.\n\nBut this is only a minor concern.\nAfter reading through the submission again to reassess it, I'm left with a question: how do we properly evaluate the task?\nIndeed, the authors suggest that they are happy with properties that are *changeable, diverse, or even unseen before in training data*. But certainly, they still want the generated properties to be somewhat close to the training data, or else they would be no point to a data-driven approach at all.\n\nAn other way to look at it is that, some constrains are hard-set by the experimenter (positions in the example), and others (like shapes) are soft constrains influenced by the dataset and the deep model.\nGiven that the later are allowed to some degree to be unseen from the training data, how can we tell if a given level of adherence to the dataset in the generated samples is a desirable feature or an hindrance?\nIt seems to me that to evaluate those methods, we would need either an actual downstream task, or that the model provides a control over the degree of adherence to the dataset.\n\nI realize that this comes very late in the discussion process, but I would appreciate if the authors could comment on these views.\n\nIn any case, I also understand that this potential problem is inherent to the definition of the task and not easily solved.\nIn the meantime, I updated my rating from 3 to 6.\n \n--- \n[a] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. Irina Higgins et al. ICLR 2017.\n\n[b] Isolating Sources of Disentanglement in Variational Autoencoders. Tian Qi Chen et al. NeurIPS 2018.\n\n[c] Disentangling by Factorising. Hyunjik Kim and Andriy Mnih. ICML 2018.",
" Thanks for your question and our further clarification is as follows.\n\nIn our objective we do have loss (i.e., L1 norm regularization) that encourages to select fewer variables from $w$ for each $w’$. But in the meanwhile we also have other losses that measure the difference between the training data and generated data. The optimization needs to achieve a trade-off among all the losses. We agree that “one variable $w_i$ to control x while a pair of variables that include $w_i$ to control x+y” will theoretically have a lower L1 norm regularization loss. However, this theoretical and perfect situation happens only when the image has no noise and the encoder of variational auto-encoder perfectly-precisely identifies the shapes and calculates out the ground-truth center positions of all the shapes. But in practice, both image with no noise and perfect encoder doing perfect object encoding for its detection are very hard to achieve. As a result, the model needs to work with noise related to x and x+y. Hence, one additional variable for x is learned to capture the additional difference between x and x+y caused by the noise, and thus helps decrease the loss that measures the difference between the training data and generated data.",
" I'd like to thank the authors for the very substancial rebuttal.\n\nI believe my concerns have been adequatly adressed. Crucially, the answers corrected a miss-conception from my part as to the goals and motivations of the paper. The additional figures completed the previous results nicely, providing more information about the different components of the model.\n\nI still have a minor comment regarding response #3: \n*(2) The fact that \"x position\" and \"x+y position\" attributes are diluted into two variables indicates “x position” and “x+y position” are not 100% colinear and are also not 100% independent. This is why we see each of them has two variables in w, where one variable is shared between them while the remaining one of each of them is not shared.*\n\nI agree with this statement. What I meant that it would be very intuitive to encode \"x position\" as one value $w_i$, and \"x+y position\" as a pair of value that includes $w_i$ (or vice-versa). Wouldn't this configuration be better in terms of loss?\n",
" We appreciate so much for reviewer's comments and feedbacks that made our paper further improved while we were addressing the concerns. We are also glad that reviewers approved our work regarding important tasks that we solve.\n\\\n\\\n**Comment #1**: In Table 1, although the proposed method dominating the others, it would be great to provide more analysis about the results why sometimes the degree of superiority is very large while sometimes its performance is close to the others. Why CorrVAE has worse performance than CSVAE on two independent variables x position and scale?\\\n**Response #1**:\\\n(1) We have refined our discussion of Table 1 by adding more details in the main paper (line 323-342): \n\\\n\\\n\"We evaluate the learning ability of the proposed model and ... CorrVAE-2 models $w$ to $w′$ using simple linear regression, which cannot capture the non-linear correlation among properties that might exist in the dataset\".\\\n\\\n(2) CSVAE models the distribution of $w$ given properties $y$ via a simple Gaussian MLP whereas CorrVAE needs to learn the whole variance-covariance matrix (Eq. (9)) and the the mapping function from $w$ to $w'$ (Eq. (5)) if correlation exists among properties. Hence, CorrVAE needs to learn more parameters than CSVAE. This will lead to a comparable or slightly worse performance given the potential overfitting if the pattern of the dataset is linear and simple to learn (e.g., shape or x position for dSprites dataset). As shown in Figure 1, CorrVAE works much better than CSVAE if more complex settings exist, such as complex patterns of the dataset (e.g., Pendulum) or there are correlation among properties since in this case the correlation contributes to the prediction of properties and the complex data structure needs more parameters to learn. CSVAE cannot well capture the correlation among properties even in the dSprites dataset, which can be validated from the results in Table 1 in that it has a MSE of 0.3563 in predicting x+y postion which is much worse than 0.0066 obtained from CorrVAE.\n\n**Comment #2**: The white text on Figure 3 and Figure 4 can be larger to be more readable. The fond size of Table 1 and Table 2 should be aligned.\\\n**Response #2**:\\\nThank you for the valuable suggestions. We have enlarged the font size of our Figure 3 and Figure 4.\n\n**Comment #3**: Missing references: For example [1] below can be discussed in the Related works session.\\\n**Response #3**:\\\nThank you for the valuable suggestions. We will surely discussed the reference in our revised paper.",
" Thank you very much for your detailed summarization and insightful comments. Please find our answers to your comments/questions below. We have updated our paper based on your suggestions. The summary of updates in the paper are listed in a separate comment on top of the webpage. If you have any further comments/suggestions on the updated version of our paper, we will be glad to improve on them. We also sincerely hope that the revised version and responses could help with an increase in the score.",
" **Comments #7**: Overall, I believe the paper overclaims what is the proposed model able to do. It indeed preserve the controlled attributes, arguably better than the baselines, but seemingly at the cost of loosing the other attributes, even some as fundamental as shape or orientation. I could change my opinion if the authors can provide evidence that this assessment is wrong (by answering the questions for instance), or if they can show that the trade-off is a desirable feature.\\\n**Response #7**: \\\nThanks for the opportunity for us to clarify. We have added the clarification and new experiments as introduced in the following:\\\n(1) In controllable data generation, our intention is to preserve the controlled properties while making other properties changeable, diverse, or even unseen before in training data. For example, in controllable drug molecule designing, we have two purposes: 1) make sure the generated molecules stick to the required properties of interest. 2) encourage the diversity and novelty of the molecules by varying all the other uncontrolled properties in order to achieve novel drug discovery. Similarly, when generating object images, suppose our controlled properties of interest are only objects' size and position, then analogously we want all the other properties (e.g., shapes and orientations) to be changeable and we'd love to see they are different from the training data (e.g., shapes and orientations unseen before in training data). In all, our paper achieves both 1) preserving controlled properties and 2) perturbating the uncontrolled properties, both of which are highly desired.\\\n(2) In addition, to show the versatility of our method, we have added new experiments where we can preserve the shape, by treating \"shape\" as controlled property. Specifically, in Figure 3 (a) of the main paper, we have shown that when we control “shape” via w and mask, the shape of the objects changes when traversing the corresponding variable in w that controls it. Meanwhile, in Figure 4 of the main paper, we have shown that “shape” can also be well controlled by the constraints of the multi-objective optimization framework. We also visualized the whole batch of images in Appendix Figure 2 generated based on each property constraint of Figure 4 to show that our experiments are consistent and replicable.\\\n**Comments #8**: Minors typos: l199: thrid l183 w_i^T · w_j : w_i l326-327: it seem it should be CorrVAE-2 insteand of CorrVAE-1? Vertical spacing after subsection titles are very unusual.\\\n**Response #8**:\\\nThank you for pointing out these typos. We have corrected all of them in the revised paper",
" **Comments #5**: While the authors claim that the ablation CorrVAE-1 that use ground truth masks are achieving better performance than CorrVAE, it is not clear in the result Table 1. This could also indicate that the model is not working as expected and that different variables in the model do not exactly capture the information they are intended to get. Why would CorrVAE-1 would be slightly worse than CorrVAE on some task? How significant is the difference?\\\n**Response #5**: \\\nFor correlated properties, CorrVAE-1 did outperform CorrVAE, which is what our technique designed for and hence demonstrated our effectiveness. For independent properties case, both CorrVAE-1 and CorrVAE perform similarly which indicate that the ground truth mask does not help a lot for such simple situation. More specifically:\\\n(1) For correlated properties, CorrVAE-1 did achieve better performance, such as “x position” and “x+y position” for dSprites dataset and “shadow length” and “light position” for Pendulum dataset, which is aligned with the results in Table 1. For example, MSE of “x position” is 0.0059 for CorrVAE-1, which is better than 0.0077 achieved by CorrVAE. MSE of “x+y position” is 0.0023 for CorrVAE-1, which is better than 0.0066 achieved by CorrVAE. The MSE of “shadow length” and “light position” is respectively 2.4626 and 11.2878 for CorrVAE-1 compared with 10.26 and 15.39 for CorrVAE. \\\n(2) For independent properties, CorrVAE-1 would be slightly worse but comparable with CorrVAE since CorrVAE and CorrVAE-1 share the same technique in capturing the independence among properties. For instance, CorrVAE-1 and CorrVAE have comparable results in controlling “size” with value 0.0024 and 0.0016, respectively, as shown in Table 1. Similarly, for pendulum angle and shadow position of pendulum dataset, CorrVAE and CorrVAE-1 also have comparable results with the value 39.9255 and 36.3700 respectively for pendulum angle, and 6.3579 and 6.0250 respectively for shadow position, as shown in Table 1 We have added the corresponding discussion in the main paper line 325-346.\\\n(3) While for the “slightly worse” issue for independent properties, this is because the enforced supervision of “ground-truth” mask distracts the focus of the model a bit from “learning independent properties” from “learning corrected properties”. Fortunately, the decrease for independent properties of CorrVAE is small enough to ignore.\\\n**Comments #6**: Important implementation details are in Appendix C. At the very least, the fact that the mask is encouraged to be sparse should be mentioned in the main paper. I would argue that approximations and relaxation of the problem, such as Monte-Carlo, Gumbel-SoftMax and Spectral Normalization should also be mentioned in the main paper when used as they are not exact implementations of the provided formula.\\\n**Response #6**: \\\nThank you for this valuable suggestion. We have added the detailed usage and implementation of the mask matrix to the main paper (line 170- 172)\\\n",
" **Comment #3**: The authors discuss in section 5.4 and show in Fig 3 latent traversals of w. However, it would appear that the w space is not that interesting, as they do not align well with w' and y. For instance, both the \"x position\" and \"x+y position\" attributes are diluted into two variables (figure 5). This is especially surprising since it increases the KL term in Eq3, and the mask sparcity loss. Meanwhile, the arguably more interesting variable w' that would be used for attribute manipulation is barely investigated. Can the authors provide qualitative samples for latent traversal of w', and quantitative figures on how well the attributes are retained?\\\n**Response #3**: \\\n(1) The w space is interesting in that it contains independent latent variables to aggregate to w’ to control the corresponding y. If there exists one variable in w that contributes to two properties only when those two properties are correlated with each other.\\\n(2) The fact that \"x position\" and \"x+y position\" attributes are diluted into two variables indicates “x position” and “x+y position” are not 100% colinear and are also not 100% independent. This is why we see each of them has two variables in w, where one variable is shared between them while the remaining one of each of them is not shared. This exactly reflects the rationale of the controllability of our model. In addition, this will not increase the KL term in Eq.3. The KL term is not relevant to the mask matrix, which aggregates information from w to w’, but aims to approximate the posterior of $q_{\\phi}(w, z|x)$. Instead, well captured correlation among properties will decrease Eq. 3 and compensate for the possible increase of the mask sparsity loss (L1 norm) in the objective function since the second term of Eq. 3 corresponds to the negative log likelihood of predicted properties. \\\n(3) We have added experiments in Appendix Figure 6 to qualitatively visualize the latent traversal of w'. We can see that the property will change accordingly as we traverse the corresponding w’. The attributes are annotated at the right top corner of each generated image. We also added relevant discussions in Appendix line 98-105 as:\n\\\n\\\n“Moreover, we also traverse the latent variables in $w'$ by simultaneously traversing on latent variables in $w$ corresponding to the associated $w'$ and visualize how the relevant property changes in Appendix Figure 6 . As is shown in Appendix Figure 6 (a), the shape of the pattern changes from ellipse to square as we traverse on $w_1'$. In Appendix Figure 6 (b), the size of the pattern shrinks as we traverse on $w_2'$. In Appendix Figure 6 (c), the \\emph{x position} of the pattern moves from left to right as we traverse on $w_3'$. In Appendix Figure 6 (d), the y position of the pattern moves from top to bottom as we traverse on $w_4'$. In Appendix Figure 6 (e), the x position, y position and x+y position of the pattern simultaneously change as we traverse on $w_4'$, where x position moves from left to right, y position moves from bottom to top and x+y position increase.” \n\\\n\\\n**Comment #4**: I would also like to see the full mask M in Figure 5. What about \"y position\" for instance?\\\n**Response #4**: \\\n(1) The mask M in Figure 5 in the original version is the full mask regarding three target properties of interest (i.e., x position, x+y position, and size). 
\\\n(2) In addition, by following your suggestion to explicitly control “y position”, we enrich our training data by adding supervision labels of “y” position so we have new mask. The results of this additional experiments are shown below.:\n\\\n\\\n(a) We have replaced Figure 5 with the newly learned mask (The old Figure 5 has been moved to Appendix Figure 4). Figure 5 shows that the newly learned mask matrix well captures the correlation among “x position”, “y position” and “x+y position”, where w3 simultaneously controls “x position”, “y position” and “x+y position”, w4 simultaneously controls “x position” and “x+y position” while w5 simultaneously controls “y position” and “x+y position”.\n\\\n\\\n(b)To validate that the correlation among properties is captured by the learned mask, we visualize generated images by traversing latent variables in w to control corresponding properties according to the mask in Figure 5. We have also updated the relevant discussions in the main paper (line 381-385), as shown below.\n\\\n\\\n“We also evaluate the more complex setting by traversing the value of w3 within [−5, 5] that simultaneously controls x position, y position and x+y position. Not surprisingly, the position of the pattern changes in both horizontal and vertical directions, corresponding to x+y position. At the mean time, x position and y position change accordingly, as shown in Figure 3 (e).”",
" \n**Comment #2**: The samples shown in Fig3 and Fig4 for dShapes are very bad, even in disentanglement VAE models standard. The shape attribute, that should be encoded in the independent z variables, are not only not conserved with attribute manipulation, but also they do not seem to be shapes that are in the dataset. This drastically limits the usefulness and significance of the method for data generation. Can the authors provide reconstruction errors and FID for generated data, for both seen and unseen combination of attributes?\\\n**Response #2**:\\\n(1) It is reasonable and our intention to not preserve “shape” attribute. In controllable data generation, our intention is to preserve the controlled properties while making other properties changeable, diverse, or even unseen before in training data. For example, in controllable drug molecule designing, we have two purposes: 1) make sure the generated molecules stick to the required properties of interest. 2) encourage the diversity and novelty of the molecules by varying all the other uncontrolled properties in order to achieve novel drug discovery. Similarly, when generating object images, suppose our controlled properties of interest are only objects' size and position, then analogously we want all the other properties (e.g., shapes and orientations) to be changeable and we'd love to see they are different from the training data (e.g., shapes and orientations unseen before in training data). In all, our paper achieves both 1) preserving controlled properties and 2) perturbating the uncontrolled properties, both of which are highly desired.\\\n(2) In addition, to show the versatility of our method, we have added new experiments where we can preserve the shape, by additionally treating \"shape\" as controlled property. Specifically, in Figure 3 (a) of the main paper, we have shown that when we control “shape” via w and mask, the shape of the objects changes when traversing the corresponding variable in w that controls it. Meanwhile, in Figure 4 of the main paper, we have shown that “shape” can also be well controlled by the constraints of the multi-objective optimization framework. We also visualized the whole batch of images in Appendix Figure 2 generated based on each property constraint of Figure 4 to show that our experiments are consistent and replicable. The added discussion is shown below:\n\\\n\\\nLine 373-375: “Based on the mask matrix shown in Figure 5, as shown in Figure 3 (a), we traverse the value of w1 within [−5, 5] and the shape of the pattern changes accordingly from ellipse to square.”\nLine 390-405: “The experiments are performed based on the model that controls five properties, shape, scale, x position, y position and x+y position…All properties including shape are roughly aligned with the constraints”\n\\\n\\\n(3) We have added FID values, reconstruction error and negative log likelihood in Appendix Table 5 to quantitatively evaluate the quality of the shape of generated images:\n\n| Method | -log Prob | Rec. 
Error | FID |\n|----------|-----------|------------|-------|\n| CSVAE | 0.26 | 227 | 86.14 |\n| Semi-VAE | 0.23 | 239 | 86.05 |\n| PCVAE | 0.23 | 222 | 85.45 |\n| CorrVAE | 0.22 | 229 | 85.17 |\n\nThe relevant discussion is also added in line 339-342, as shown below:\n\n“According to Appendix Table 4 and Appendix Table 5, both molecule and image data are generated well by the proposed model since CorrVAE achieves 100\\% validity and novelty on molecular generation and comparable reconstruction error and FID values on image generation”\n\\\n\\\n(4) In addition, we also qualitatively evaluate the shape of generated images by visualizing the whole batch of images generated in Appendix Figure 21, based on each property constraint from Figure 4 . The discussion about Appendix Figure 2 is added in line 385-387, as shown below:\n\\\n\\\n“We also showcased the whole batch (eight) of generated images in Appendix Figure 2 corresponding to each constraint of Figure= 4. All images for the same constraint look similar, indicating the consistency and the replicability of our model.”\n",
" Thank you very much for your detailed summarization and insightful comments. Please find our answers to your comments/questions below. We have updated our paper based on your suggestions. The summary of updates in the paper are listed in a separate comment on top of the webpage. If you have any further comments/suggestions on the updated version of our paper, we will be glad to improve on them. We also sincerely hope that the revised version and responses could help with an increase in the score.\n\n**A summary of updates based on comments from Reviewer2**:\\\n(1) We have added experiments to evaluate the quality of generated images using negative log likelihood, reconstruction error and FID values in Appendix Table 5.\\\n(2) We have visualized the generated images from CorrVAE in Appendix Figure 1.\\\n(3) We have conducted new experiments by controlling five properties: “shape”, “size”, “x position”, “y position” and “x+y position”. The corresponding mask matrix is shown in Figure 5.\\\n(4) We added experiments by traversing five latent variables in w that correspond to five properties, respectively, and the change of generated images are visualized in Figure 3.\\\n(5) We updated the constraints for multi-objective optimization by controlling “shape”. The generated images are updated and presented in Figure 4.\\\n(6) We showcased in Appendix Figure 2 a batch of images generated under four sets of constraints under the multi-objective optimization framework corresponding to Figure 4. Images generated under the same constraint look similar indicating the consistency and replicability of our model.\\\n(7) We also added corresponding discussions according above newly added contents.\\\n\n**Comment #1**: The experiments fail to sufficiently back the claims made in the paper. Crucially, the paper is framed as a data generation method but the proposed experimental protocol do not assess the quality of the generated data, only the preservation of a few attributes.\\\n**Response #1**: \\\nThanks for the comment. We have added the requested evaluation of the data generation performance in addition to attribute preservation in the revised paper, including:\\\n(1) For the task of molecule generation on QM9 and QAC datasets, we have added the evaluation on qualities of generated data are measured by validity, novelty and uniqueness, which follows the typical molecule generation works [1] [2], as shown below and in Appendix Table 4. \n\n| Method | QAC | | | QM9 | | |\n|----------|--------|---------|------------|--------|---------|------------|\n| | Validy | Novelty | Uniqueness | Validy | Novelty | Uniqueness |\n| Semi-VAE | 100% | 100% | 37.5% | 100% | 100% | 82.5% |\n| PCVAE | 100% | 100% | 89.2% | 100% | 99.6% | 92.2% |\n| CorrVAE | 100% | 100% | 44.5% | 100% | 91.2% | 23.8% |\n\n(2) For the task of image generation, we have additionally visualized the generated data in Appendix Figure 1. In addition, we added the results regarding FID, reconstruction error and negative log likelihood to evaluate the quality of generated images, which follows the typical image generation works [3] [4]. The results are added in Appendix Table 5. 
We also have added the relevant discussion in line 339-342, as shown below:\n\n“According to Appendix Table 4 and Appendix Table 5, both molecule and image data are generated well by the proposed model since CorrVAE achieves 100% validity and novelty on molecular generation and comparable reconstruction error and FID values on image generation”\n\nThe CorrVAE has superior performance in both negative log likelihood and FID values whereas CorrVAE achieves comparable reconstruction error with CSVAE and PCVAE but performs better than Semi-VAE.\n\n| Method | -log Prob | Rec. Error | FID |\n|----------|-----------|------------|-------|\n| CSVAE | 0.26 | 227 | 86.14 |\n| Semi-VAE | 0.23 | 239 | 86.05 |\n| PCVAE | 0.23 | 222 | 85.45 |\n| CorrVAE | 0.22 | 229 | 85.17 |\n\n\n\\\n\\\n[1] Samanta, Bidisha, et al. \"Nevae: A deep generative model for molecular graphs.\" Journal of machine learning research. 2020 Apr; 21 (114): 1-33 (2020).\\\n[2] Luo, Youzhi, Keqiang Yan, and Shuiwang Ji. \"Graphdf: A discrete flow model for molecular graph generation.\" International Conference on Machine Learning. PMLR, 2021.\\\n[3] Ak, Kenan E., et al. \"Incorporating reinforced adversarial learning in autoregressive image generation.\" European Conference on Computer Vision. Springer, Cham, 2020.\\\n[4] Razavi, Ali, Aaron Van den Oord, and Oriol Vinyals. \"Generating diverse high-fidelity images with vq-vae-2.\" Advances in neural information processing systems 32 (2019).\n",
" **Comment #6**: The proposed idea is very interesting, however, for property-controlled generation, each time it needs to solve the optimization problem to find the proper w for the target property, this could be expensive, also the paper did not compare with the approaches that do bayesian optimization in the latent to find a best patent variable that can generate the data with the target properties.\n\n**Response #6**: \n* The basic time expense is incurred by the nature of the correlated-property-controllable data generation problem instead of our method. Specifically, correlated property control is a multiobjective optimization problem where different objectives typically have conflicts and hence an optimization is incurred.\n* Our method does not need to optimize each time. Specifically, once the targeted property is specified, then we will execute the multi-objective optimization to identify the w values. Then we can just randomly sample the z variables' values, together with the identified w, to generate as many as possible new data objects without the need for optimization. This is one of the nice advantages of our method.\n* We are working on adding one more variant of CorrVAE (CorrVAE-3) based on Bayesian optimization to compare with other comparison models. We will update our results in the paper within the discussion period due to the time limit of the author response period.\n* The framework based on Bayesian optimization cannot capture the correlation among properties so that it lacks interpretability compared with ours which is able to capture and indicate correlation among properties using the learned mask matrix (Figure 5).",
" We appreciate the reviewer's valuable comments and time in helping us improve the manuscript. We reply to each of the points raised below:\n\n**Comment #1**: Corresponding properties in figure3 and 4 are not readable.\n\n**Response #1**: Thank you for the valuable suggestion. We have increased the font size of the annotation in Figure 3 and Figure 4 in the revised paper.\n\n**Comment #2**: It is not very clear to me how this conditional property is simplified: $q(w,z|x,y) = q(w,z|x)$\n\n**Response #2**: This equation is satisfied given the assumption that the information of the y (data properties) is included in $x$ (data) and $y$ can be extracted from $x$, for example, $y=f(x)$. As a result, $q(w,z|x,y)=q(w, z|x, f(x))=q(w,z|x)$. We have clarified this in Appendix Section A line 29\n\n**Comment #3**: I am not sure why we need the J matrix in $w'=wJ^TM$ , is not the masking matrix $M$ learn if an element of w contribute to $w'$ or not and how big the contribution is?\n\n**Response #3**: Thank you for the opportunity for us to clarify. “$w’=$” is a typo here and has been removed from our original paper (line 175). We use $wJ^TM$ to generate the matrix that formalizes how values in w can be aggregated to $w’$ via the masking matrix $M$ and the aggregation function $h()$ (Eq. 5). Each column of the resulting matrix corresponds to the related latent variables in w to be aggregated to predict the corresponding $y$. The vector $J$ is a vector with all values as one thus does not need to be learned. We have added the clarification in line 178-183.\n\n**Comment #4**: I did not really understand assumption 2 where they derive x and y are independent given z and w.\n\n**Response #4**: Thank you for the question and giving us the opportunity to clarify this. Given $x\\perp y\\vert w$, $y\\perp z, z\\perp w$, we have $y\\perp z\\vert w$ and $x\\perp y\\vert (w, z)$. Based on the Bayesian rule, we have $p(x, y\\vert w, z) = p(y\\vert x, z ,w)p(x\\vert z, w) = p(x\\vert y, z, w)p(y\\vert z, w)$. This equation can be canceled as $p(y\\vert w)p(x\\vert z, w) = p(x\\vert y, z, w)p(y\\vert w)$ given $y\\perp z\\vert w$ and $y\\perp x\\vert w$. Then we have $p(x\\vert z, w)=p(x\\vert y, z, w)$, indicating that $x\\perp y\\vert (w, z)$. We have also added the proof to Appendix line12 - 21.\n\n**Comment #5**: In equation (2), from the first line to the next where they assume y_i can be learned from w'_i, where w_is are independent, is this assumption always holds? so each w'_i is independent, each y_i is derived from one w'_i, but y_i are correlated. So I am a bit confused that if each y_i is only derived from one w'_i, and w'_is are independent, how can they produce a set of y_i that are correlated? This leads to my next question the learned latent w itself stated that elements in w are disentangled from each other, so is the intension behind why we introduce another set $w'$ that are also disentangled.\n\n**Response #5**: \n* Yes, the assumption always holds in our setting that as long as any independent variables in $w$ contribute to a certain property $y_i$ via $w_i'$, its information will be deterministically aggregated to $w_i'$ via Eq. 5 and Eq. 6. Each $w_i'$ is not independent but the set of values of $w$ that $w_i'$ aggregates are independent (line 151-154). The Eq. 2 holds since properties y are independent conditional on $w’$ and $w’$ aggregates all information from $w: log p(y\\vert w)=p(y\\vert w’)=\\sum_{i=1}^{m}p(y_i\\vert w’)=\\sum_{i=1}^{m}p(y_i\\vert w_i')$. 
We have added this detailed derivation in Appendix Section A (line 25-28).\n* $W_i’$ are not independent (not disentangled) with each other but variables in w are independent and disentangled with each other. If $y_i$ and $y_j$ are correlated then $w_i’$ is also correlated with $w_j’$ but they share common variables in w. For instance, as shown in Figure 5, w4 is shared by both “x position” and “x+y position” so that we can control those two variables via w4. The $w’$ that predicts “x+y position” aggregates information from w3, w4 and w5 (according to horizontal vector of “x+y position”).\n",
" We sincerely appreciate for reviewers' comments and feedbacks that made our paper further improved when we were addressing their concerns. We are also glad that reviewers approved our clarifications and are satisfied with how we addressed their comments. The following provides a final summary of the updates:\n\n(1) We have added experiments to evaluate the quality of generated images using negative log likelihood, reconstruction error and FID values in Appendix Table 5.\\\n(2) We have visualized the generated images from CorrVAE in Appendix Figure 1.\\\n(3) We have conducted new experiments by controlling five properties: “shape”, “size”, “x position”, “y position” and “x+y position”. The corresponding mask matrix is shown in Figure 5.\\\n(4) We added experiments by traversing five latent variables in w that correspond to five properties, respectively, and the change of generated images are visualized in Figure 3.\\\n(5) We updated the constraints for multi-objective optimization by controlling “shape”. The generated images are updated and presented in Figure 4.\\\n(6) We showcased in Appendix Figure 2 a batch of images generated under four sets of constraints under the multi-objective optimization framework corresponding to Figure 4. Images generated under the same constraint look similar indicating the consistency and replicability of our model.\\\n(7) We also added corresponding discussions according above newly added contents. The location of these discussion will be identified in the response to reviewers.\\\n(8) All old figures that control three properties (i.g., “size”, “x position” and “x+y position”) are moved to the Appendix: Old Figure 3 is moved to Appendix Figure 3. Old Figure 4 is moved to Appendix Figure 5. Old Figure 5 is moved to Appendix Figure 4.\\\n(9) We added results that compare CorrVAE and the Bayesian optimization-based model (BO) in Appendix Table 8. We also added relevant discussions in Appendix line 83-95.",
" Generating data with multiple constraints on its correlated properties is a critical task. The authors address this task by designing a new mask pooling layer to identify and control correlated properties using independent latent variables. These latent variables are bond to properties based on the mutual dependence. Then these latent variables are optimized to generate data with desired properties under a multi-objective optimization framework. The effectiveness of the proposed model has been shown on the molecule and image datasets in the evaluation session. Strengths:\n1. The tasks approached by the paper are challenging but practically important. The framework proposed by the authors is novel.\n1.1 The proposed framework includes a mask pooling layer learned to capture and control correlated properties via the independent latent variables. This also to some extent adds interpretability to the model.\n1.2 In addition, the framework employs a set of bridging latent variables w to aggregate information from w to predict properties. The exact recovery of w from properties can be achieved via an invertible constraint of mutual dependence.\n1.3 The authors formally propose a multi-objective optimization framework for simultaneously control correlated properties of generated data. It comes with the generality in terms of accommodating different optimization goals and constraints, which looks reasonable and interesting for controllable data generation.\n\n2. The experiments show the effectiveness of the model. \n2.1.\tThe mask pooling layer works well given the results shown in Figure 5 under the setting that x position and x+y position should be correlated with each other. \n2.2. Figure 3 shows the effect of disentanglement of the latent variable w. Correlated properties x+y position can be well controlled by the corresponding latent variable given by the results of mask pooling layer.\n2.3. As shown in Table 1, CorrVAE can handle correlated properties x+y position better than other comparison models on dSprites dataset. Table 2 shows the effectiveness of CorrVAE on molecule datasets on MolWeight and logP properties.\n\n3 The code of the paper is well packaged and published.\n\nWeakness:\n1. In Table 1, although the proposed method dominating the others, it would be great to provide more analysis about the results why sometimes the degree of superiority is very large while sometimes its performance is close to the others.\n2. The white text on Figure 3 and Figure 4 can be larger to be more readable. The fond size of Table 1 and Table 2 should be aligned.\n3. Missing references: For example [1] below can be discussed in the Related works session.\n\n[1] Xie, Yutong, et al. \"Mars: Markov molecular sampling for multi-objective drug discovery.\" arXiv preprint arXiv:2103.10432 (2021).\n 1. Provide more discussion about the results in Table 1. Why CorrVAE has worse performance than CSVAE on two independent variables x position and scale?\n2. Discuss the missing reference [1] in the Related works session.\n The paper discusses the limitation when pointing out the potential future works in the Conclusion. \nThat the work does not have potential negative social impact.",
" The paper tackles the problem of controllable data generation when the controlled attributes y are correlated.\nTo do so, it proposes to separate the latent code in a disentangled VAE into two parts, z and w, where w contains the variables that are correlated to y, and z the variables independent of w (and hence to y). The disentangled variables w are combined into correlated variables w' via a learned mask M that indicates for each value of w', which values of w contribute to it. The values of w' are mapped one-to-one to the attributes y using an invertible network.\nThe model is used for controlled generation, to identify correlations between attributes, and for generation using multi-objective constrained optimization. Strengths:\n- The paper address the problem of attributes correlation that is often ignored in controlled generation in a principled way.\n- Experiments have been conducted on real molecular data, in addition to synthetic images.\n\nWeaknesses:\n- The experiments fail to sufficiently back the claims made in the paper. Crucially, the paper is framed as a data generation method but the proposed experimental protocol do not assess the quality of the generated data, only the perservation of a few attributes. - The samples shown in Fig3 and Fig4 for dShapes are very bad, even in disentanglement VAE models standard. The shape attribute, that should be encoded in the independent z variables, are not only not conserved with attribute manipulation, but also they do not seem to be shapes that are in the dataset. This drastically limits the usefulness and significance of the method for data generation. Can the authors provide reconstruction errors and FID for generated data, for both seen and unseen combination of attributes?\n- The authors discuss in section 5.4 and show in Fig 3 latent traversals of w. However, it would appear that the w space is not that interesting, as they do not align well with w' and y. For instance, both the \"x position\" and \"x+y position\" attributes are diluted into two variables (figure 5). This is especially surprising since it increases the KL term in Eq3, and the mask sparcity loss. Meanwhile, the arguably more interesting variable w' that would be used for attribute manipulation is barely investigated. Can the authors provide qualitative samples for latent traversal of w', and quantitative figures on how well the attributes are retained?\n- I would also like to see the full mask M in Figure 5. What about \"y position\" for instance?\n- While the authors claim that the ablation CorrVAE-1 that use ground truth masks are achieving better performance than CorrVAE, it is not clear in the result Table 1. This could also indicate that the model is not working as expected and that different variables in the model do not exactly capture the information they are intended to get. Why would CorrVAE-1 would be slightly worse than CorrVAE on some task? How significant is the difference?\n- Important implementation details are in Appendix C. At the very least, the fact that the mask is encouraged to be sparse should be mentioned in the main paper. I would argue that approximations and relaxation of the problem, such as Monte-Carlo, Gumbel-SoftMax and Spectral Normalization should also be mentioned in the main paper when used as they are not exact implementations of the provided formula.\n\nOverall, I believe the paper overclaims what is the proposed model able to do. 
It indeed preserve the controlled attributes, arguably better than the baselines, but seemingly at the cost of loosing the other attributes, even some as fundamental as shape or orientation.\nI could change my opinion if the authors can provide evidence that this assessment is wrong (by answering the questions for instance), or if they can show that the trade-off is a desirable feature.\n\n\nMinors typos: \n\nl199: thrid\n\nl183 w_i^T · w_j : w_i \n\nl326-327: it seem it should be CorrVAE-2 insteand of CorrVAE-1?\n\nVertical spacing after subsection titles are very unusual. The paper do not mention its limitations.\nI believe the weaknesses raised above do qualify as limitations in term of data generation and should be adressed in the paper.",
" The paper tries to address conditional generation where conditioning properties are correlated.\nThey point out that existing works that tackle the controllable property generation do not consider the correlation between multiple properties, as it is difficult to identify property correlation. Therefore, they propose a deep generative model that uses invertible mapping to map correlated properties to latent independent variables. During learning, they learn to encode the property and other information through two different encoders and use a novel mask pooling layer on the latent representation corresponding to the property encoder to learn the correlated properties. \n\n\n strength: The problem they try to solve is beneficial for the community and the proposed method is interesting\n\nweakness: the presentation of the paper could be improved, some parts are bit confusing. They did not compare with the model where they explicitly model the dependency between the properties using hirerichical model or with those who do unconditional generative model coupled with Bayesian optimization to do conditional generation. \n\nThe proposed model seems really interesting but I had some confusion:\n\n1. corresponding properties in figure3 and 4 are not readable. \n2. It is not very clear to me how this conditional property is simplified: q(w,z|x,y) = q(w,z|x)\n3. I am not sure why we need the J matrix in w'=wJ^TM , is not the masking matrix M learn if an element of w contribute to w' or not and how big the contribution is?\n4. I did not really understand assumption 2 where they derive x and y are independent given z and w.\n5. In equation (2), from the first line to the next where they assume y_i can be learned from w'_i, where w_is are independent, is this assumption always holds? so each w'_i is independent, each y_i is derived from one w'_i, but y_i are correlated. So I am a bit confused that if each y_i is only derived from one w'_i, and w'_is are independent, how can they produce a set of y_i that are correlated? This leads to my next question the learned latent w itself stated that elements in w are disentangled from each other, so what is the intension behind why we introduce another set w' that are also disentangled. \n The proposed idea is very interesting, however, for property-controlled generation, each time it needs to solve the optimization problem to find the proper w for the target property, this could be expensive, also the paper did not compare with the approaches that do bayesian optimization in the latent to find a best patent variable that can generate the data with the target properties. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
2
] | [
"MV4Wb8WcfezO",
"vUIthqPDPTj",
"ZS6cvsv2QMi",
"JfblpkrDxk",
"i4BDPd-kdgP",
"E64hFLLC5eP9",
"m3Mgv9w3Xtt",
"GU0VImi3IEs",
"smBe3JwDx2b",
"smBe3JwDx2b",
"smBe3JwDx2b",
"smBe3JwDx2b",
"smBe3JwDx2b",
"veiF-WnHrWP",
"GU0VImi3IEs",
"nips_2022_3uj_8G7fxgs",
"nips_2022_3uj_8G7fxgs",
"nips_2022_3uj_8G7fxgs",
"nips_2022_3uj_8G7fxgs"
] |
nips_2022_lgNGDjWRTo- | Deep Generative Model for Periodic Graphs | Periodic graphs are graphs consisting of repetitive local structures, such as crystal nets and polygon mesh. Their generative modeling has great potential in real-world applications such as material design and graphics synthesis. Classical models either rely on domain-specific predefined generation principles (e.g., in crystal net design), or follow geometry-based prescribed rules. Recently, deep generative models have shown great promise in automatically generating general graphs. However, their advancement into periodic graphs has not been well explored due to several key challenges in 1) maintaining graph periodicity; 2) disentangling local and global patterns; and 3) efficiency in learning repetitive patterns. To address them, this paper proposes Periodical-Graph Disentangled Variational Auto-encoder (PGD-VAE), a new deep generative model for periodic graphs that can automatically learn, disentangle, and generate local and global graph patterns. Specifically, we develop a new periodic graph encoder consisting of global-pattern encoder and local-pattern encoder that ensures to disentangle the representation into global and local semantics. We then propose a new periodic graph decoder consisting of local structure decoder, neighborhood decoder, and global structure decoder, as well as the assembler of their outputs that guarantees periodicity. Moreover, we design a new model learning objective that helps ensure the invariance of local-semantic representations for the graphs with the same local structure. Comprehensive experimental evaluations have been conducted to demonstrate the effectiveness of the proposed method. | Accept | This paper proposed an interesting model for generating periodic graphs. The hierarchical representation of periodic graphs decomposes into local structure and global structure, and greatly reduces the size of the structure to be modeled, which is an interesting contribution. All reviewers liked the idea of this representation and lean toward acceptance.
I would, however, encourage the authors to include some discussion of the limitations of this approach, in particular the dependence on a stable node ordering, and the fact that the model can only generate perfectly periodic graphs (what if the graph is mostly periodic but with some noise?). | train | [
"jjaMyjWQ6dr",
"WE3ycH9mL5",
"yYujJgMMkbX",
"X-fjVX-ucqA",
"fOzYVbU0WV",
"NUJKYywLdl",
"YsfKj3yNxLw",
"uh7UPxbEDWy",
"n__G6ZvxzQB",
"exo0JpbvG_5",
"F3N1QrX4p9S",
"iFo3zb3y7dR",
"anbOKfRDUbi",
"keqGZfvVTeX",
"EtEJwQ9xCNM",
"M1lt_ADs2UVP",
"tSRw8S7UH8b",
"KRQFV49EHnO",
"SnyJmnu75DC",
"q5Fs1EMrkF"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My comments have been properly addressed.",
" We sincerely appreciate for reviewers' comments and feedbacks that made our paper further improved when we were addressing their concerns. We are also glad that reviewers approved our clarifications and are satisfied with how we addressed their comments. The following provides a final summary of the updates:\n\n1. We have addressed all typos in the revised paper.\n2. We have discussed the paper \"Crystal diffusion variational autoencoder for periodic material generation\" in line 129-130 as suggested by reviewer 2.\n3. We have discussed the paper \"SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators.\" in line 116-117 as suggested by reviewer 4.\n4. We have showcased the application of our model by predicting atom types using QMOF dataset and presented the results in Appendix Section G.\n5. We have conducted additional experiments by adding GraphVAE as the comparison model and the results are shown in Table 2 and Table 3.\n6. We have conduced additional experiments to show the stability of BFS ordering of nodes and relavent discussions in Appendix Section H.\n",
" Thank you for your precious time and suggestions for us to improve our work!\n\nWe conduct additional experiments to show the application of our model by adding a prediction function (modeled by MLP) to predict atom types of MOFs using QMOF dataset. Note that periodic graphs contain repeated basic units so that we only need to predict atom types for one basic unit and assign them to other basic units. The results have been added to Appendix Section G.",
" Thank you for your precious time and patience!\n\nWe conduct additional experiments to show one potential application of our model by adding a prediction function (modeled by MLP) to predict atom types of MOFs using QMOF dataset. Note that periodic graphs contain repeated basic units so that we only need to predict atom types for one basic unit and assign them to other basic units. The results have been added to Appendix Section G.",
" Thank you. They seem not perfectly stable, but as you say not terrible. As promised I increased my score.",
" We sincerely thank the reviewer for giving us the opportunity to clarify this.\n\nTo order the nodes in $A^{(l)}$ and $A^{(g)}$, we apply BFS-based-ordering, which is commonly employed by existing works for graph generation such as GraphRNN [1] and GRAN [2]. Here the rooted node is selected as the node with the largest node degree in the graph. Then BFS starts from the rooted node and visits neighbors in a node-degree-descending order. Such a BFS-based-ordering works well in terms of stability, which is similar to the situation in GraphRNN [1] and GRAN [2].\n\n(1) Suppose the graph has $n$ nodes in total. The number of possible node orderings of a graph is $n!$, but in practice BFS-based-ordering typically results in much smaller number of orderings, as explained in Section 2.3.4 in [1]. This means BFS typically leads to a much smaller number of node orderings than that led by random-ordering, which is what existing works observed in the experiments in real-world graphs [1].\n\n(2) We conducted additional experiments to demonstrate the stability of BFS-based orderings comparing to random-orderings that acts as a baseline. We randomly pick up 100 graphs from the QMOF dataset. For each graph $g_i$ (i=1,2,...,100), we randomly permute its nodes for 100 times and obtain 100 randomly permuted graphs $G$=\\{$g_1, g_2, …, g_{100}$\\}. (Hence in total we have 10,000 graphs). For each of these graphs, we apply the BFS as described above to order their nodes and obtain the corresponding BFS-ordered graphs $G’$=\\{$g_1’, g_2’, …, g_{100}’$\\}. Then we calculate Kendall rank correlation coefficient and Spearman's Rank Correlation coefficient along with relative p-values for each pair of graphs in G’. These two coefficients are commonly-used metrics for measuring similarity between orderings. Then, by comparison, for each graph in G we randomly order its nodes and obtain the corresponding randomly-ordered graphs $G’’$=\\{$g_1’’, g_2’’, …, g_{100}’’$\\}. We also calculate Kendall rank correlation coefficient and Spearman's Rank Correlation Coefficient along with relative p-values for each pair of graphs in G’’. The results regarding the average Kendall rank correlation coefficient, Spearman's Rank Correlation Coefficient, and p-values on both BFS-ordered graphs and randomly ordered graphs are shown below.\n\n| Data | Spearman's Rank Correlation | | Kendall rank correlation | |\n|-------------------------------|-----------------------------|---------|--------------------------|---------|\n| | Coefficient | p-value | Coefficient | p-value |\n| BFS-ordered graphs (G’) | **0.6734** | **0.0114** | **0.5742** | **0.0106** |\n| Randomly-ordered graphs (G’’) | 0.1243 | 0.5040 | 0.0852 | 0.5009 |\n\nBased on the results in the table, BFS-based-ordering achieves a Spearman's Rank correlation coefficient of 0.6734 which is much larger than the value of 0.1243 calculated from Randomly ordering. This indicates that although the input graphs come with very different node permutation, the BFS-based-ordering method can transfer them into very similar node orderings, which reveals the stability in BFS-based node ordering. Moreover, the low p-value of 0.0114 indicates that such stability is statistically significant. Similarly, BFS-based-ordering achieve a Kendall rank correlation coefficient of 0.5742 which is much larger than the value of 0.0852 calculated from Randomly ordering. 
This again indicates that although the input graphs come with very different node permutation, the BFS-based-ordering method can transfer them into very similar node orderings, which reveals the stability in BFS-based node ordering. Moreover, the low p-value of 0.0106 again indicates that such stability is statistically significant.\n\nWe will add those results and discussions in our revised paper.\n\n[1] You, Jiaxuan, et al. \"Graphrnn: Generating realistic graphs with deep auto-regressive models.\" International conference on machine learning. PMLR, 2018.\n\n[2] Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. NeurIPS’2016 Workshop (2016). [3] Liao, Renjie, et al. \"Efficient graph generation with graph recurrent attention networks.\" Advances in neural information processing systems 32 (2019).",
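For readers who want to reproduce the stability check described in the response above, here is a minimal, self-contained sketch (not the authors' code) of the protocol: randomly relabel a graph, recover a degree-descending BFS ordering rooted at the highest-degree node, and compare the resulting orderings with rank correlations from scipy. The toy grid graph and all function names are illustrative stand-ins rather than anything taken from the paper.

```python
# Minimal sketch of the BFS-ordering stability check described above (illustrative only).
import random
import networkx as nx
from scipy.stats import spearmanr, kendalltau

def degree_descending_bfs_order(G):
    """BFS rooted at the max-degree node, visiting neighbors in degree-descending order."""
    root = max(G.nodes, key=lambda v: G.degree(v))
    order, visited, queue = [], {root}, [root]
    while queue:
        u = queue.pop(0)
        order.append(u)
        for v in sorted(G.neighbors(u), key=lambda w: G.degree(w), reverse=True):
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order  # node labels in BFS order

def bfs_positions_under_random_relabeling(G, seed):
    """Randomly permute node labels, run BFS, and report each ORIGINAL node's BFS position."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    shuffled = nodes[:]
    rng.shuffle(shuffled)
    perm = dict(zip(nodes, shuffled))            # original label -> new label
    H = nx.relabel_nodes(G, perm)
    pos_of_new = {v: i for i, v in enumerate(degree_descending_bfs_order(H))}
    return [pos_of_new[perm[v]] for v in nodes]  # BFS position of each original node

G = nx.grid_2d_graph(5, 5)                        # toy periodic-like graph as a stand-in for QMOF
a = bfs_positions_under_random_relabeling(G, seed=0)
b = bfs_positions_under_random_relabeling(G, seed=1)
print(spearmanr(a, b))    # higher correlation indicates a more stable ordering
print(kendalltau(a, b))
```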
" Thank you for the detailed answers and improvements made to the paper. The lack of permutation equivariance still bugs me. That being said I have increased my score and if the BFS ordering procedure proves to be stable, I'll increase it one more point.\n\nAnother note is that while the orbital counts are more informative than than clustering coefficient or degree counts it's still somewhat local (spectrum is better in this regard as it's truly global).",
" Thank you for the explanation. How is the root node for the BFS selected? This presumably impacts the final ordering of the graph.",
" **Comment #8**: What ordering do the GraphRNN and GRAN use? The default BFS/DFS orderings that were originally used for these models or do you also provide them with the true graph ordering you use for your model?\\\n**Response #8**:\\\nIn our work, we handle the permutation invariance by defining a canonical order of nodes. We can easily obtain the adjacency matrix ($A^{(l)}$) of a basic unit (i.e., unit cell for QMOF, two nodes linked by an edge for synthetic data and MeshSeg). Then these adjacency matrices of the single basic unit are put on the diagonal of the adjacency matrix A. The order of these basic units on the diagonal of A is determined by BFS. In addition, $A^{(n)}$ is fixed due to the periodicity of the graph. Once $A^{(l)}$, $A^{(n)}$ and the order of the basic units are fixed, then $A^{(g)}$ is fixed. A can be deterministically formed via $A^{(l)}$, $A^{(n)}$ and $A^{(g)}$. As a result, we can have a canonical order of periodic graph.\n\n**Comment #9**: In Figure 3, are the trees in the lower left corner part of the dataset? It is unclear which dataset this figure comes from. I presume it to be the synthetic one, but it does not have trees. If it is the synthetic dataset, it seems strange trees are generated even though all base units in the dataset where polygons.\\\n**Response #9**:\\\n(a) This figure comes from the model trained on the Synthetic dataset.\n(b) As shown in Figure 1, the basic units learned by our model are not adjacent with each other (i.e., share common nodes or edges). For instance, the basic unit for the synthetic data is just two nodes connected by an edge (line segment), indicating that the local pattern is well learned by our model. The branches shown in Figure 3 are just those basic units (line segments). The shape that they can form, such as triangle, square or polygon, is determined by $A^{(n)}$ learned by the neighborhood decoder. ",
" Thank you very much for your detailed summarization and insightful comments. Please find our answers to your comments/questions below. We have updated our paper based on your suggestions. If you have any further comments/suggestions on the updated version of our paper, we will be glad to improve on them. We also sincerely hope that the revised version and responses could help with an increase in the score.\n\n**Comment #1**: Some typos: for example, in Figure 2, it should be A^{(l)}, A^{(g)} and A^{(n)} rather than A_l, A_g and A_n.\\\n**Response #1**:\\\nThank you for pointing this out. We have addressed those typos in the revised paper.\n\n**Comment #2**: Missing references: for example, [1] below aims to generate periodic structures, although seems using different strategies and different type of input data.\\\n**Response #2**:\\\nThank you for guiding us to this paper. Indeed this one focus on the generation of periodic graphs and we will surely add it to our paper.",
" Thank you very much for your detailed summarization and insightful comments. Please find our answers to your comments/questions below. We have updated our paper based on your suggestions. If you have any further comments/suggestions on the updated version of our paper, we will be glad to improve on them. We also sincerely hope that the revised version and responses could help with an increase in the score.",
" **Comment #4**: Why only two graph similarity measures (density and clustering coefficient) are used? Even the baselines that were used such as GRAN used more advanced measures, such as orbital counts or spectral features. Using only density and clustering coefficient are determined by local features, so there are no measures that show how well the global graph structure is reproduced. Which is one of the main goals of the work.\\\n**Response #4**: We have added the metrics KLD of orbital between generated graphs and graphs in the training set in Table 2.\\\n**Comment #5**: I'm also confused why the GraphVAE baseline was not used. It certainly has the closest overall architecture (GNN encoder + MLP decoder). Authors argue that it is too expensive, but the expense comes from the graph matching used to compute a permutation invariant loss. But authors claim, that for the graphs they consider we can assume the node order to fixed as otherwise their proposed method does not work. Also, GRAN used GraphVAE as a baseline on datasets with graphs as large as the ones considered in this paper (Protein graphs used in GRAN paper had up to 500 nodes). In GRAN paper they also just discarded GraphVAE matching loss and used the node ordering as given to GRAN model itself, which worked fine.\\\n**Response #5**:\\\nThank you for the valuable suggestions of the review. We have added GraphVAE as the comparison model and the results are shown in Table 2 and Table 3.\\\n**Comment #6**: Writing quality could certainly be improved. The text is wordy, there's a lot of typos, sometimes words are missing in the sentences, even multiple words per sentence (e.g. line 125). I would strongly suggest the authors to at least use some sort of a spelling and grammar correction software (e.g. Grammarly).\\\n**Response #6**:\\\nThank you for the valuable comments, we have proofread the paper again and corrected all the typos in the paper.\\\n**Comment #7**: How exactly is m an n selected for each dataset? As I understood it, you use analytic methods (domain knowledge) to determine the optimal substructure sizes. This already seems to give a lot of ground-truth information for the model.\\\n**Response #7**:\\\nThe values of m (i.e., a safe estimate of largest number of basic units) and n (i.e., a safe estimate of largest number of nodes in each basic unit) are set as sufficiently-large values that are estimated following commonly-used manners in graph generation methods such as GraphVAE [1], VGAE [2], and GRAN [3]. In practice, the de facto estimation is readily achieved by the domain common sense. For example, to generate small molecules, GraphVAE estimates the largest number of nodes as 9 when applying it to QM9, a dataset of small molecules. GRAN estimates the largest number of nodes as 500 when applying it to Protein, a dataset of protein graphs where nodes are amino acids. Similarly, our m is set as 6 and n is set as 60 when being used in dataset MeshSeq, where there are less than dozens of number of nodes for each basic unit. Our m is set as 12 and n is set as 30 when being used in QMOF dataset which contains relatively larger basic units. We do NOT need analytic methods (domain knowledge) to determine the optimal substructure sizes. Moreover, the information of the number of nodes in each basic unit (and even the substructure) are commonly available in many periodic graph datasets and crystal structure datasets such as QMOF, OQMD, and Matbench. 
Hence it is even more convenient for us to use them when they are available.\n\n[1] Simonovsky, Martin, et al. \"Graphvae: Towards generation of small graphs using variational autoencoders.\" International conference on artificial neural networks. Springer, Cham, 2018.\\\n[2] Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. NeurIPS’2016 Workshop (2016).\n[3] Liao, Renjie, et al. \"Efficient graph generation with graph recurrent attention networks.\" Advances in neural information processing systems 32 (2019).",
" **Comments #1**: Recently, a more general approach to decouple local and global structure generation, not focused on periodic graphs has been proposed [1]. I think this should be mentioned in the related work. I haven't seen other work that focuses in particular on periodic graph generation using deep learning. Which makes this particular problem quite novel and potentially useful in problems such as crystal structure design.\\\n**Response #1**:\\\nThank you for providing this information and we have surely added and discussed this paper in our related works section line 116-117.\n\n**Comments #2**: What I find problematic, is the assumption that the node order in the graph is known and can be consistently determined. First, it limits us to a small set of problems where we can even hope to determine such an ordering (e.g. atom lattices). Further, even for the problems considered here it's not entirely clear to me that we can determine such an ordering. ...... In context of this an important reference [2] discusses, that for graph generation, you need a permutation invariant loss function. Which we do not have in the approach proposed here. I also think this reference should be included in the paper. The second problem I see with assuming that we can get this canonical graph structure (ordering) is that in this case we are essentially just dealing with a set of matrices. So we can just use a traditional and very powerful matrix processing models such as convolutional neural networks. In fact the proposed architecture is a bit like a CNN already as it considers a bunch of patches and how they combine/overlap. So under this assumption I'm not even sure we need a GNN-based architecture.\\\n**Response #2**:\\\n(1) For now the proposed model borrows the same strategy as VGAE [1] to handle the order of nodes. Basically the graph is generated in a one-shot manner and without considering the graph matching. Suppose $A^{(l)}\\in\\mathbb{R}^{n\\times n}$ is the adjacency matrix of basici unit, $A^{(g)}\\in\\mathbb{R}^{m\\times m}$ is the adjacency matrix denoting the pairwise relations between basic units and $A^{(n)}\\in\\mathbb{R}^{n\\times n}$ is the incidence matrix regarding how the nodes in adjacent basic units are connected. Even if we employ the graph matching algorithm in GraphVAE to solve the permutation invariance issue we can still achieve a better model complexity as we only need to match the order of nodes in $A^{(l)}$ and $A^{(n)}$. This corresponds to the time complexity of $O(m^4+n^4)$ which is much smaller than $O((mn)^4)$ beared by GraphVAE.\\\n(2) GNN (e.g., GCN or GIN) is better to capture the local information of the graph since it captures the local information by aggregating the information of neighbors of each node in the graph. By contrast, CNN considers the adjacency matrix as a bunch of patches rather than a topological structure of the graph so that it basically just counts how many edges there are in a patch which might consist of many nodes without providing any topological information of the graph.\\\n**Comment #3**: A small note on the Global pattern encoder - it's true that using GIN and global pooling gives you 1-WL representation power, but 1-WL is inherently a local procedure, so I'm not sure that's the best architecture to capture the global graph structure. 
Models that usually perform better on graph-level tasks could be considered, such as the more expressive GNN architectures or graph transformers, which do not suffer from having only local interactions.\\\n**Response #3**:\\\n(a) Thank you for the valuation comments. The proposed model is a general framework in that its encoder and decoder of the graph can be replaced with any alternatively depending on the needs. GIN, GCN and graph transformers are three options here as the encoder. \n(b) Although currently we use GIN as the encoder of our framework (PGD-VAE), we have achieved superior performance compared with other comparison models such as GraphRNN, GRAN and VGAE. As shown in Table 2, PGD-VAE achieves KLD smaller than comparison models by 0.8191 for clustering coefficient and 0.5062 for graph density on average on MeshSeq dataset, and 2.4275 for clustering coefficient as well as 1.7611 for graph density on average on Synthetic dataset. PGD-VAE also achieves better quality of graph generation regarding the uniqueness than GraphRNN and GRAN as shown in Table 3.\\\n\n[1] Kipf, Thomas N., and Max Welling. \"Variational graph auto-encoders.\" arXiv preprint arXiv:1611.07308 (2016).\n",
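As a concrete illustration of the block structure described in Response #2 above ($A^{(l)}$ blocks on the diagonal, $A^{(n)}$ between basic units that $A^{(g)}$ marks as adjacent), here is a small sketch of one plausible assembly. It is written under my own simplifying assumption that every pair of adjacent basic units is connected through the same incidence pattern $A^{(n)}$; it is not the paper's exact assembler, and all names and the toy matrices are hypothetical.

```python
# Sketch (assumptions noted above) of assembling a full adjacency matrix A from a basic-unit
# adjacency A_l (n x n), a unit-level adjacency A_g (m x m), and an inter-unit pattern A_n (n x n).
import numpy as np

def assemble_periodic_adjacency(A_l, A_g, A_n):
    # Diagonal blocks: copies of the basic unit; off-diagonal blocks: A_n wherever A_g says
    # two basic units are adjacent. Symmetrize and binarize for an undirected graph.
    m = A_g.shape[0]
    A = np.kron(np.eye(m), A_l) + np.kron(A_g, A_n)
    return np.clip(A + A.T, 0, 1)

# Toy example: basic unit = an edge (2 nodes), 3 units arranged in a ring.
A_l = np.array([[0, 1], [1, 0]])
A_g = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # upper-triangular ring pattern over units
A_n = np.array([[0, 0], [1, 0]])                    # node 1 of a unit attaches to node 0 of the next
print(assemble_periodic_adjacency(A_l, A_g, A_n).astype(int))  # yields a 6-cycle
```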
" My concern about the contrastive loss is addressed. Therefore, I will increase my score.",
" **Comment #1**: According to the authors, this paper's main idea of the generation process is to make the generation of local structures and global repeating patterns independent. But they introduce a contrastive loss to make the generation of local structures actually conditioned on the global repeating patterns.\\\n**Response #1**:\\\n(1) We don’t have the term “global repeating patterns” in our paper. We only have $z_g$ that captures global information of the graph and $A^{(g)}$ that denotes the pairwise relations among basic units. None of them are related to the calculation of the contrastive loss.\\\n(2) The local structures are not conditioned on the global one. The contrastive loss is computed by only enforcing the representation of local structures ($z_l$) equal for graphs with the same basic unit and different for those with different basic units; as shown in the Equation of contrastive loss below, it has nothing to do with the global $z_g$.\\\n**Comment #2**: In this paper, the generation of node type such as atom type in QMOF is ignored. Only the structure is generated.\\\n**Response #2**: \\\nAlthough generating node attribute is not the focus of our paper, we are still working on to add experiments to predict atom types of generated MOF simultaneously when generating the graph topology and will update our results shortly.\\\n**Comment #3**: Permutation invariance is ignored. The proposed method can not satisfy permutation invariance when the node type needs to be considered. The application of the proposed method is limited because of breaking permutation invariance.\\\n**Response #3**\\\nFor now the proposed model borrows the same strategy as VGAE [1] to handle the order of nodes. Basically the graph is generated in a one-manner and without considering the graph matching. Suppose $A^{(l)}\\in\\mathbb{R}^{n\\times n}$ is the adjacency matrix of basici unit, $A^{(g)}\\in\\mathbb{R}^{m\\times m}$ is the adjacency matrix denoting the pairwise relations between basic units and $A^{(n)}\\in\\mathbb{R}^{n\\times n}$ is the incidence matrix regarding how the nodes in adjacent basic units are connected. Even if we employ the graph matching algorithm in GraphVAE to solve the permutation invariance issue we can still achieve a better model complexity as we only need to match the order of nodes in $A^{(l)}$ and $A^{(n)}$. This corresponds to the time complexity of $O(m^4+n^4)$ which is much smaller than $O((mn)^4)$ beared by GraphVAE.\\\n**Comment #4**: Clarity: Hard to read. There are variables coming from nowhere and not shown in the equation, such as Equation (8). Many sentences hard to understand.\\\n**Response #4**: \\\nThe notations in Eq. (8) have been explained in Section 3.11, line 197 and line 203-204.\\\n**Comment #5**: If the goal is to make the generation of local structures and global repeating patterns independent, why do you introduce a contrastive loss to make the generation of local structures conditioned on the global repeating patterns?\\\n**Response #5**:\\\n(1) We don’t have the term “global repeating patterns” in our paper. We only have $z_g$ that captures global information of the graph and $A^{(g)}$ that denotes the pairwise relations among basic units. None of them are related to the calculation of the contrastive loss.\\\n(2) The local structures are not conditioned on the global one. 
The contrastive loss is computed by only enforcing the representation of local structures ($z_l$) equal for graphs with the same basic unit and different for those with different basic units; as shown in the Equation of contrastive loss below, it has nothing to do with the global $z_g$.\\\n\n[1] Kipf, Thomas N., and Max Welling. \"Variational graph auto-encoders.\" arXiv preprint arXiv:1611.07308 (2016).\n",
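The exact contrastive loss is Eq. (8) in the paper and is not reproduced here; the sketch below only illustrates the general behaviour described in the responses above, namely pulling together the local representations $z_l$ of graphs that share a basic unit and pushing apart those that do not, with $z_g$ playing no role. The margin-based form, the function name, and the toy inputs are my own illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only (not Eq. (8) from the paper): a generic contrastive term on z_l.
import torch
import torch.nn.functional as F

def local_contrastive_loss(z_l, unit_ids, margin=1.0):
    """z_l: (B, d) local representations; unit_ids: (B,) integer id of each graph's basic unit."""
    dist = torch.cdist(z_l, z_l, p=2)                          # pairwise Euclidean distances
    same = (unit_ids.unsqueeze(0) == unit_ids.unsqueeze(1)).float()
    eye = torch.eye(len(unit_ids), device=z_l.device)
    pos = same - eye                                           # same basic unit, excluding self-pairs
    neg = 1.0 - same                                           # different basic units
    pull = (pos * dist.pow(2)).sum() / pos.sum().clamp(min=1)  # pull z_l of same-unit graphs together
    push = (neg * F.relu(margin - dist).pow(2)).sum() / neg.sum().clamp(min=1)  # push others apart
    return pull + push

z_l = torch.randn(8, 16)
unit_ids = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(local_contrastive_loss(z_l, unit_ids))
```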
" **Comment #1**: Why not use the disentangled periodic graph representation as input to the PGD-VAE as well? \\\n**Response #1**:\\\nThe inputs of PGD-VAE, namely, $z_l$ and $z_g$, are disentangled periodic graph representations. Specifically, $z_l$ and $z_g$ are disentangled from each other. and the variables in $z_l$, which correspond to different basic units, are also disentangled from each other.\n\n**Comment #2**: More Ablation studies on the necessity of disentanglement should be added to validate the claim of the benefits of using disentangled generation. \\\n**Response #2**:\\\n(1) We have conducted the ablation study by removing the regularization term (contrastive loss) which is responsible to achieve the disentanglement of $z_l$ corresponding to different unit cells and the disentanglement between $z_l$ and $z_g$, as shown in Table 2.\\\n(2) We also have added the discussions in the paper:\\\n“The results in Table 2 shows that without disentanglement the model (PGD-VAE-1) achieves 0.4630 and 0.4921 of KLD on clustering coefficient and density for MeshSeq dataset, respectively, which is worse than those obtained from the full model (PGD-VAE) by 0.0653 and 0.0386. In addition, PGD-VAE-1 achieves 3.2672 and 0.8399 of KLD on clustering coefficient and density for Synthetic dataset, which is worse than the full model by 0.0809 and 0.1575 as well.”\n\n**Comment #3**: The generation results look good on the evaluation metrics. However, it is unknown how the generated graphs can be applicable to real-world applications. For instance, are the generated MOF structures stable and synthesizable?\\\n**Response #3**:\\\n(1) Generation of periodic graphs can be applied to a few real-world domains since periodic graphs naturally characterize many real-world applications ranging from crystal nets containing repetitive unit cells to polygon mesh data containing repetitive grids. We have discussed applications in line 29-31 and presented its forms in the real world Figure 1.\\\n(2) Although generating node attribute is not the focus of our paper, we are still working on to add experiments to predict atom types of generated MOF simultaneously when generating the graph topology and will update our results shortly.",
" This work proposed a novel VAE-based method to generate periodic graphs with applications to MOF, mesh and synthetic graphs generation. The proposed VAE-based method (PGD-VAE) disentangles the graph representation into local and global semantics by generating $A^{(l)}, A^{(g)}, A^{(n)} $ and then assembles them into a complete periodic graph. Overall, I enjoyed reading the paper. But there are some concerns to be addressed. Strengths:\n* The paper is well-written and easy to follow\n* To the best of my knowledge, the proposed PGD-VAE is novel in generating periodic graphs, and the proposed disentangled periodic graph representation is interesting.\n* The results are impressive on the evaluation metrics.\n\nWeaknesses:\n* Why not use the disentangled periodic graph representation as input to the PGD-VAE as well?\n* More Ablation studies on the necessity of disentanglement should be added to validate the claim of the benefits of using disentangled generation.\n* The generation results look good on the evaluation metrics. However, it is unknown how the generated graphs can be applicable to real-world applications. For instance, are the generated MOF structures stable and synthesizable? Please refer to the *weaknesses* for my questions. The limitation of this work is that the generated periodic graphs may not be applicable to real-world applications.",
" Generating novel periodic graphs is an interesting and important task as the application examples given in the paper. Three challenges towards this task are provided by the authors, which seem reasonable to me. To address these challenges, the authors propose a novel model to generate periodic graphs by automatically learning, disentangling, generating and assembling local and global patterns of periodic graphs. This idea of “hierarchically”generating periodic graphs is novel and it is suitable and efficient to handle this type of graphs as suggested by the complexity analysis part. In addition, the experimental results show the effectiveness of this model compared with other comparison models.\n Strengths:\n1 The framework proposed by the paper solves an important task to generate periodic graphs and is novel to me as it generates periodic graphs in a hierarchical fashion. This strategy looks natural since the periodic graph (e.g., graph structure of materials) can be very huge and is intuitively more efficient since it only needs to learn and generate local and global patterns. Also, to my understanding, the node embedding clustering module embedded in the local-pattern encoder can learn the local structure of the graph and perform at least as well as the sum pooling on the whole graph. \n2 The analysis on the model is comprehensive both theoretically and experimentally. The learning objective has been derived properly. The complexity of the model is better than the comparison models as shown in Table 1. The proposed model also has better performance than the comparison models.\n3 The presentation of the paper is good to me. Figure 2 is organized very well and is intuitive to show how the framework functions. The ability to disentangle local and global patterns has been demonstrated by Figure 3.\n4 The authors published the code.\n\n\nWeakness:\n1 Some typos: for example, in Figure 2, it should be A^{(l)}, A^{(g)} and A^{(n)} rather than A_l, A_g and A_n.\n2 Missing references: for example, [1] below aims to generate periodic structures, although seems using different strategies and different type of input data.\n \n[1] Xie, Tian, et al. \"Crystal diffusion variational autoencoder for periodic material generation.\" arXiv preprint arXiv:2110.06197 (2021).\n 1. The author can add and discuss the missing references, such as [1] in the Related Works session.\n2. Please check and fix the typos, e.g., that in Figure 2.\n The limitations of the proposed work are discussed in the Conclusion session.\nThis work does not have potential negative societal impacts.",
" This paper focuses on the problem of periodic graph generation, and proposes PGD-VAE, a Variational AutoEncoder based generative model for periodic graphs that can automatically learn, disentangle, and generate local and global patterns. The authors formulate the generation of periodic graphs as the generation of a finite number of basic units. ## Strengths\n* originality: This paper formulates a new problem setting for periodic graph generation. Instead of generating basic units and repeating patterns, they generate the periodic graph consist of a fixed number of basic units. \n\n## Weaknesses\n* According to the authors, this paper's main idea of the generation process is to make the generation of local structures and global repeating patterns independent. But they introduce a contrastive loss to make the generation of local structures actually **conditioned** on the global repeating patterns.\n* In this paper, the generation of node type such as atom type in QMOF is ignored. Only the structure is generated.\n* Permutation invariance is ignored. The proposed method can not satisfy permutation invariance when the node type needs to be considered. The application of the proposed method is limited because of breaking permutation invariance. \n* Because of above considerations, the quality and significance of the proposed method is limited.\n* Clarity: Hard to read. There are variables coming from nowhere and not shown in the equation, such as Equation (8). Many sentences hard to understand.\n * If the goal is to make the generation of local structures and global repeating patterns independent, why do you introduce a contrastive loss to make the generation of local structures conditioned on the global repeating patterns? Yes, they addressed the limitations of the proposed method.",
" The paper proposes to tackle periodic graph generation, by learning to generate basic structural units and to compose them into a larger graph. This is achieved by using a hierarchical VAE approach with a loss term which encourages disentanglement. Mostly, current deep graph generation approaches aim to generate the graph by focusing on local structure so any attempt to introduce more focus on the global structure or decouple global and local structure generation is very welcome. Recently, a more general approach to decouple local and global structure generation, not focused on periodic graphs has been proposed [1]. I think this should be mentioned in the related work. I haven't seen other work that focuses in particular on periodic graph generation using deep learning. Which makes this particular problem quite novel and potentially useful in problems such as crystal structure design.\n\nWhat I find problematic, is the assumption that the node order in the graph is known and can be consistently determined.\nFirst, it limits us to a small set of problems where we can even hope to determine such an ordering (e.g. atom lattices). Further, even for the problems considered here it's not entirely clear to me that we can determine such an ordering. For example lets consider the MeshSeq dataset. In this dataset we have a sequence of cuts people used to partition a given mesh. Even though the cuts are sequential, they gives us two equivalent parts (we build a binary tree, not a sequence) and depending on the particular mesh of the same type of object or its orientation which part is the \"left\" of the split and which is \"right\" could differ. Also, In general for mesh processing various equivariant architectures are normally used, due to the problem that in general we don't have a way to find a canonical orientation (ordering) of a mesh. So I don't see how we can get a canonical representation of a mesh here to use a loss that is not invariant to permutations.\nIn context of this an important reference [2] discusses, that for graph generation, you need a permutation invariant loss function. Which we do not have in the approach proposed here. I also think this reference should be included in the paper.\nThe second problem I see with assuming that we can get this canonical graph structure (ordering) is that in this case we are essentially just dealing with a set of matrices. So we can just use a traditional and very powerful matrix processing models such as convolutional neural networks. In fact the proposed architecture is a bit like a CNN already as it considers a bunch of patches and how they combine/overlap. So under this assumption I'm not even sure we need a GNN-based architecture.\n\nA small note on the Global pattern encoder - it's true that using GIN and global pooling gives you 1-WL representation power, but 1-WL is inherently a local procedure, so I'm not sure that's the best architecture to capture the global graph structure. Models that usually perform better on graph-level tasks could be considered, such as the more expressive GNN architectures or graph transformers, which do not suffer from having only local interactions.\n\nWhy only two graph similarity measures (density and clustering coefficient) are used? Even the baselines that were used such as GRAN used more advanced measures, such as orbital counts or spectral features. 
Using only density and clustering coefficient are determined by local features, so there are no measures that show how well the global graph structure is reproduced. Which is one of the main goals of the work.\n\nI'm also confused why the GraphVAE baseline was not used. It certainly has the closest overall architecture (GNN encoder + MLP decoder). Authors argue that it is too expensive, but the expense comes from the graph matching used to compute a permutation invariant loss. But authors claim, that for the graphs they consider we can assume the node order to fixed as otherwise their proposed method does not work. Also, GRAN used GraphVAE as a baseline on datasets with graphs as large as the ones considered in this paper (Protein graphs used in GRAN paper had up to 500 nodes). In GRAN paper they also just discarded GraphVAE matching loss and used the node ordering as given to GRAN model itself, which worked fine.\n\nWriting quality could certainly be improved. The text is wordy, there's a lot of typos, sometimes words are missing in the sentences, even multiple words per sentence (e.g. line 125). I would strongly suggest the authors to at least use some sort of a spelling and grammar correction software (e.g. Grammarly).\n\n[1] Martinkus et al. \"SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators.\" ICML 2022\n[2] Vignac and Frossard \"Top-N: Equivariant Set and Graph Generation without Exchangeability\" ICLR 2022 How exactly is m an n selected for each dataset? As I understood it, you use analytic methods (domain knowledge) to determine the optimal substructure sizes. This already seems to give a lot of ground-truth information for the model.\n\nWhat ordering do the GraphRNN and GRAN use? The default BFS/DFS orderings that were originally used for these models or do you also provide them with the true graph ordering you use for your model?\n\nIn Figure 3, are the trees in the lower left corner part of the dataset? It is unclear which dataset this figure comes from. I presume it to be the synthetic one, but it does not have trees. If it is the synthetic dataset, it seems strange trees are generated even though all base units in the dataset where polygons. As I mentioned in the weaknesses, the assumption that the adjacency matrix has a true canonical ordering is quite problematic. Even though for regular graphs in some cases we have something close to a canonical ordering. It essentially goes against all of the graph machine learning wisdom that is based on equivariance to the point that it is unclear we even need graph machine learning to solve this problem when this assumption trully holds."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"exo0JpbvG_5",
"nips_2022_lgNGDjWRTo-",
"keqGZfvVTeX",
"tSRw8S7UH8b",
"NUJKYywLdl",
"YsfKj3yNxLw",
"q5Fs1EMrkF",
"n__G6ZvxzQB",
"q5Fs1EMrkF",
"KRQFV49EHnO",
"q5Fs1EMrkF",
"q5Fs1EMrkF",
"q5Fs1EMrkF",
"EtEJwQ9xCNM",
"SnyJmnu75DC",
"tSRw8S7UH8b",
"nips_2022_lgNGDjWRTo-",
"nips_2022_lgNGDjWRTo-",
"nips_2022_lgNGDjWRTo-",
"nips_2022_lgNGDjWRTo-"
] |
nips_2022_fp33Nsh0O5 | Deep Generalized Schrödinger Bridge | Mean-Field Game (MFG) serves as a crucial mathematical framework in modeling the collective behavior of individual agents interacting stochastically with a large population. In this work, we aim at solving a challenging class of MFGs in which the differentiability of these interacting preferences may not be available to the solver, and the population is urged to converge exactly to some desired distribution. These setups are, despite being well-motivated for practical purposes, complicated enough to paralyze most (deep) numerical solvers. Nevertheless, we show that Schrödinger Bridge — as an entropy-regularized optimal transport model — can be generalized to accepting mean-field structures, hence solving these MFGs. This is achieved via the application of Forward-Backward Stochastic Differential Equations theory, which, intriguingly, leads to a computational framework with a similar structure to Temporal Difference learning. As such, it opens up novel algorithmic connections to Deep Reinforcement Learning that we leverage to facilitate practical training. We show that our proposed objective function provides necessary and sufficient conditions to the mean-field problem. Our method, named Deep Generalized Schrödinger Bridge (DeepGSB), not only outperforms prior methods in solving classical population navigation MFGs, but is also capable of solving 1000-dimensional opinion depolarization, setting a new state-of-the-art numerical solver for high-dimensional MFGs. Our code will be made available at https://github.com/ghliu/DeepGSB. | Accept | This paper proposes a novel, simple but effective algorithm to solve Mean Field Games. Reviewers found the paper well written, presenting an exact and flexible method. Despite its simplicity, the method solves a wide class of MFGs. Authors were also able to demonstrate the computational advantage of their method, providing good experimental data. | train | [
"gAWjrjza-hM",
"gBTXQUzSeS1",
"Bx3B5odoKgC",
"geMijoTY4Rh",
"pq9PwlJYopu",
"uhEmeAozQbH",
"ioWyiu1yePz",
"fDp1CYCWlf",
"iv6LpODV-h",
"236QopZ6zHm",
"QqluvKeGJfDt",
"3yGUYZeDsQ9",
"5IS0Q1Td2qi",
"gIJudO1oveOM",
"3iHrvuSBbV7y",
"7xqRbkq4hE",
"IvGjllPnU4",
"kZzHQhayjgCB",
"xPs93ck2PKY",
"fdCEu3U3JMQ",
"hWkBAkB7eew",
"xNPEYkus7s5"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the reply. We are pleased that the reviewer appreciated our clarifications, and we greatly appreciate the reviewer's willingness to raise the score. To ease the reviewer's burden, we provide the following list of content, linking each of our responses to the [_section, line, page_] in the current revision. All notable changes before App A.5 are marked blue. Admitted that we are still limited to 9 main pages during this stage, we plan to move some of these clarifications from Appendix to the main paper once the page limit is relaxed.\n\n---\n\n**1. Comparison to the IPF used in FBSDE [1] and DSB [2]**\n\n- [_Sec 3.3, L155-161, p.5_] Difference between the mean-field FBSDE in Theorem 1 & prior FBSDE [1].\n\n\n- [_Sec 2, L106-108, p.3_] & [_App A.2, L517-539, p.16-17_] Relation between IPF objective (Eq 6), variational ELBO (Eq 24), and KL (Lemma 5).\n\n\n- [_App A.5.3, L726-745, p.28_] Discussions on DSB & SB-FBSDE, and how limitations of DeepGSB could be mitigated.\n\n---\n\n**2. Training procedure of DeepGSB (and FBSDE)**\n\n- [_App A.4.1, Table 6, p.22_] Additional hyper-parameters used in training DeepGSB. We highlight the fact that the training iterations per IPF (_i.e._, $K$ in Alg 1) is kept _fixed_ below Table 6, in Footnote 7.\n\n---\n\n**3. Discussion on scalability & limitation**\n\n- [_Sec 5, L281-285, p.9_] Limitation of the method in terms of unconstrained state space & computational complexity (and how the regression objectives in DSB could be adopted for improvement; also see [_App A.5.3, L742-745, p.28_]).\n\n---\n\n**4. Other clarifications**\n\n- [_App A.5.4, Fig 9, p.29_] Training progress of DeepGSB over IPF iterations (to show that DeepGSB converges rather smoothly over iterations and does not require long iterations in the first stage or fine-tuning).\n\n\n- [_App A.5.4, Fig 10-11, p.29_] Examples of DeepGSB trained without $\\widehat{\\text{TD}}\\_0$ and $\\text{TD}\\_0$.\n\n\n- Typos in L4, L37, L73, and L197 (previously L203) have all been corrected.",
" Thank you for the clarifications.\n\nI have not had time to read through the revisions in the paper. But will do so tomorrow, providing the review response is reflected in the paper revision I will increase my score.",
" I would like to thank the authors for a very thorough reply (mainly, to the first two items under my Weakness section). The computational advantages and $\\mathcal{W}_2$-based performance are shown very clearly via the extra experiments. I will raise my initial score.",
" \nWe thank the reviewer for the prompt updates and comments. Though the reviewer noted that these questions are optional, we are happy to provide additional clarification to those specifically related to FBSDE -- as it serves as the backbone of our method.\n\n---\n\n**9. Reference process in FBSDE-based methods [27]**\n\n- We appreciate the reviewer for sharing these intriguing first-hand accounts. The reviewer's observation does seem to support our interpretation of Prop 3 -- that SB-FBSDE [27] (which only minimizes $\\mathcal{L}\\_{IPF}$) may converge to a \"bridge\" but not necessarily an \"S\"B, which, in this case, shall respect the reference process. As the reviewer noted, the closeness to the reference process was enforced in SB-FBSDE [27] only _implicitly_ at initialization but never appeared as an objective in subsequent training. Though the procedure is justified in SB-FBSDE [27] by drawing connections to IPF, we suspect that there may be many other practical factors, such as the relatively simple Euler discretization of SDEs or insufficient minimization/convergence of each stage, that prevent the statement from holding in practice.\n\n\n- In this view, the combined loss we propose (and validate via Prop 3) ought to likewise aid regular SBs (and SB-FBSDEs [27]) in better respecting their reference processes. Specifically, this is reinforced _explicitly_ in the newly introduced TD objectives (notice the $\\nabla\\cdot f$ in Eq 14 & 16, where $f$ is the drift of reference process) that are otherwise absent in the objectives of [27]. We agree with the reviewer that these discussions are indeed worth highlighting, and we will include them in the later revision (after Prop 3 is introduced), once the page limit is relaxed. We thank the reviewer again for raising these comments.\n\n---\n\n[27] Likelihood training of SB using FBSDE\n",
" As before thank you for the prompt updates. I made some initial updates to the score however please bare with me I am still not quite done going through all the changes, I just wanted to give an initial score update based on what I have processed so far. \n\nAgain some more questions (apoologies):\n\n> Our interpretation (to Prop 3) is that, the minimizer of in Eq 6, as implied in Lemma 5 (also see Eq 44,45 in App A.5.2), should always establish a valid \"bridge\" transporting between marginal densities and ; yet, without further conditions, this bridge needs not be a \"Schrödinger\" bridge\n\nThis is indeed very helpful and I suspected something similar. From my personal experience when working with the SB-FBSDE and settign up MSE objectives to enforce its fixed points we found that we \"lost\" the SBP prior along due to cancelings in the objective thus the objective only enforced the boundary conditions of the SB but it was no longer bound to a reference process (which could be cause for instability). A couple of points worth pondering/clarifying:\n\n1. It is possible to write a full coupled HJB system / FBSDE whose fixed point satisfies a full bridge with a corresponding reference process. This is the case in your theorem 1 \n2. Unfortunately when deriving objectives for the fixed point it does seem that the reference process / prior is lost and only the boundary constraints are enforced. This is not an issue with the FBSDE theorem / HJB system its an issue with the derived objective. I believe this is worth highlighting. For example this objective could not be used to justify [3] as it would not quite recover an SB, however you provided an IPF justification for [3], where initialisation is what enforeces the reference process.\n3. It does seem intutive that minimising your proposed objective (jointly or coord ascent)+ initialisatising the process at the reference process would recover SB however its not something that may be necesirily true as it has not been quite formalised, so we need to be a bit more up front about what is exactly and formally learned/enforced even when doing the right thing (joint training / coordinate ascent). It is possible its not so hard to prove as the initialisation + updates being contractive make the solution unique, however is that unique solution exactly SB with the initialised SDE as its reference ?\n\nYou dont have to answer these as a lot of this is future / potentially open work that I personally find very interesting and your work very nicely sheds light to, but please do be clear about some of these gaps/what is missing/open in your final version, my initial score was so harsh because on a first glance it was super unclear what this converges to, whilst it has improvedd dramatically there are still some little remarks / wonders regarding convergence.",
" We thank the reviewer for the fast reply. We are pleased that the reviewer appreciated our new presentation and theoretical support. We greatly appreciate the reviewer's willingness to raise the score. Additional clarification is provided below.\n\n---\n\n**5. DSB and Eq 26**\n\n- As recognized by the reviewer, the objective of DSB indeed does not contain divergence. Here, we meant that _both_ Eq 26 and DSB can be derived from $ \\min_\\phi KL(q^\\theta | q^\\phi) = \\int \\mathbb{E}[||Z_\\theta + \\widehat{Z}_\\phi - \\sigma \\nabla \\log q^\\theta||^2] \\mathrm{d}s$ on the $s$ coordinate with $\\nabla \\log q^\\theta$ (rather than $t$ and $\\nabla \\log q^\\phi$; see our Response **2.3**). While Eq 26 replaces the intractable $\\nabla \\log q^\\theta$ with divergence using integration by parts, DSB instead performs $\\mathbb{E}[||Z_\\theta + \\widehat{Z}_\\phi - \\sigma \\nabla \\log q^\\theta||^2]$ $\\propto$ $\\mathbb{E}[||Z\\_\\theta + \\widehat{Z}\\_\\phi - \\sigma \\nabla \\log q^\\theta(X\\_k|X\\_{k-1})||^2]$, then estimates $\\nabla \\log q^\\theta(X^\\theta\\_k|X^\\theta\\_{k-1})$ with _forward_ samples $(X\\_{k-1}^\\theta, X\\_k^\\theta, Z\\_\\theta)$ (as it is a tractable Gaussian kernel). We refer the reviewer to App. C in [27] (see their Eq 55,56) for more details. Finally, we note that the \"$\\propto$\" step that replaces density with conditional density is a common practice adopted in most diffusion models [8,9,10].\n\n\n- Both $dX$ and the boundary assumptions are updated (L530,L525-526,L531-532). We thank the reviewer for the meticulous reading.\n\n[8] Connection bw score matching & denoising AE \n[9] SGM through SDE \n[10] Generative model by estimating grad of data distribution\n\n---\n\n**6. Proposition 3**\n\n- Prop 3 can be found in L183-188 on page 6 (proof in App. A.5.2 on page 25). We fix a minor notational error in Eq 44,45 (marked blue). This does not affect any validity of the proof.\n\n\n- As conjectured by the reviewer, Prop 3 can indeed be applied to regular SB, in which $F:=0$ vanishes. Our interpretation (to Prop 3) is that, the minimizer of $\\mathcal{L}\\_{IPF}$ in Eq 6, as implied in Lemma 5 (also see Eq 44,45 in App A.5.2), should always establish a valid \"bridge\" transporting between marginal densities $\\rho_0$ and $\\rho_{target}$; yet, without further conditions, this bridge needs _not_ be a \"Schrödinger\" bridge. While coordinate ascent, upon proper initialization & assumptions, provides an elegant way to ensure the convergence toward the \"S\"B, Prop 3 suggests an alternative by introducing TD objectives. This, as shown in this work, gives us flexibility to handle generalized SB in MFGs where $F$ becomes nontrivial. Further, it naturally handles non-differentiable $F$, which can offer extra benefits in many cases.\n\n---\n\n**7. Coordinate ascent on $(\\theta,\\phi)$**\n\n- We thank the reviewer for the great comments! Indeed, as both losses are in fact $\\mathcal{L}(\\phi; \\theta)$ and $\\mathcal{L}(\\theta; \\phi)$, it could be possible to apply coordinate ascent (or joint training) on $(\\theta,\\phi)$ to offer sound convergence statements. These are exciting comments that, in conjunction with our established results, provide a comprehensive view. 
Essentially, one may regard them as 2 distinct algorithms (crucially, _both_ are inspired by Prop 3) that trade off differently between computational efficiency and convergence.\n\n\n- To provide additional support regarding computational efficiency, we report the runtime (sec/itr) of detached (first table) and attached (second table) training w.r.t. different dimensions $d$ and diffusion steps. We note that the runtime is typically 20-50x faster for detached updates, and that we are unable to obtain results for attached training beyond steps=300, $d$≥250 due to running out of memory on our GPU (TITAN RTX, 24G).\n| | steps=100 | 200 | 300 |\n|------|-----:|-----:|-----:|\n| $d$=2 | 0.03 | 0.05 | 0.06 |\n| 250 | 0.04 | 0.08 | 0.13 |\n| 500 | 0.06 | 0.13 | 0.18 |\n| 1000 | 0.11 | 0.21 | 0.3 |\n\n\t| | steps=100 | 200 | 300 |\n\t|------|-----:|-----:|-----:|\n\t| $d$=2 | 1.32 | 2.61 | 2.83 |\n\t| 250 | 1.38 | 3.1 | N/A |\n\t| 500 | 1.42 | 3.2 | N/A |\n\t| 1000 | 1.68 | 3.86 | N/A |\n\n\n- We fully agree with the reviewer that convergence guarantees could be useful for cases that, _e.g._, require low variance, yet we also wish to note that it can be equally important for other applications, especially the high-dimensional problems considered in this work, to take into account the computational complexity. These clarifications will be added to the main paper in the later revision (given that we are still limited to 9 main pages during the rebuttal), and we will ensure they are properly credited to the reviewer. Again, we always appreciate the reviewer's precious time in providing their valuable feedback.\n\n---\n\nWe thank the reviewer again for all the comments. If our replies adequately address your concerns, we would like to kindly ask the reviewer to raise the score, so that it better reflects the discussion at the current stage.\n",
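For readability, the two KL expansions contrasted earlier in this thread (the backward-time expansion used in Eq 26 and DSB versus the forward-time expansion of [5]) can be placed side by side. The display below is a transcription of the expressions already given in the responses (with any multiplicative constants treated exactly as written there, and $s := T - t$); it is a restatement, not a new derivation.

```latex
% Both expansions of the same KL, as stated in the responses above.
\begin{align*}
\mathrm{KL}(q^{\theta}\,\|\,q^{\phi})
  &= \int \mathbb{E}\big[\,\|Z_{\theta} + \widehat{Z}_{\phi} - \sigma \nabla \log q^{\theta}\|^{2}\big]\,\mathrm{d}s
     \quad \text{(backward coordinate $s := T-t$; Eq 26 and DSB)} \\
  &= \int \mathbb{E}\big[\,\|Z_{\theta} + \widehat{Z}_{\phi} - \sigma \nabla \log q^{\phi}\|^{2}\big]\,\mathrm{d}t
     \quad \text{(forward coordinate $t$; the treatment in [5])}
\end{align*}
```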
" > ... This is a necessary and sufficient condition ...\n \nProposition 3 does indeed seem to bridge the gap from theorem 1 to alg 1. This result is intorduced the revised version correct ? most of my criticisms can be sumarised as \"wee need propsition 3\". \n\nI need to spend some additional time going through the derivation this combined objective/proof seems to lacking from [3]. My main confusion with this result is that if true then it would also apply to the non mean field setting just regular SBP, and that implies you have derived a rather nice/unconstrained objective for SBP which admits attached updates (something that could be useful in certain applications requiring low variance/etc). \n\nI still have one more concern. From my understanding (and please clarify if wrong) in L_IPF(\\theta) the expectation is taken wrt \\phi whilst in L_IPF(\\phi) the expectation is with respect to \\theta ? so both loss terms depend on \\phi and theta correct ? if this is the case then consider the following options/questions:\n\n1. I apply coordinate ascent on \\phi, \\theta . This gives a sound algorithm with convergence gaurantees much like IPF (under a lot of extra assumptions) . However since L_IPF(\\theta) is a function of \\phi and L_IPF(\\phi) of \\theta both will appear in each of the coordinate updates, thus the resulting algorithm is not quite the same as the one you propose since in your algo when you update \\theta , L_IPF(\\phi) does not contribute and vicevers as you have detached the expectation. I know you made a comment on how using the previous detached \\theta_is and phi_is respectively makes the policies get closer each time but this is a heuristic argument. Formally doing coordinate ascent here would enjoy strong gaurantees, whilst algorithm 1's detaches dont seem to, I understand this part is IPF inspired.\n2. Taking gradients jointly on \\phi and \\theta would also enjoy sound gaurantees, but again this is not what you do.\n\nIn short I think its important to higlight this specially the point made in (1.) above. That is carrying out coordinate ascent on the objective of prop 3 would enjoy nice convergence gaurantees similar to those of IPF however that does NOT reduce to the alternating heuristic proposed in algorithm 1 which is indeed a heuristic algorithm that lacks convergence gaurantees. Nonetheless you can claim how its partly motivated from prop 3, coordinate ascent and the nice detached iterations in IPF. \n\nDoes this make sense ? to reiterate my main point here is that if you applied sound optimisation schemes on (theta,\\phi) for prop 3 you would end up with attached path expectations and quite different updates to Algorithm 1. I think its very important to be clear/upfront about this. The results are really strong and the heuristic stems from a good place so its fine but a bit more clarity is needed and it would be nice to compare results with exact coordinate ascent on (theta,\\phi) to ablate how much is lost by doing these detaches. \n\n[3] MLE of SGM\n",
" I am very happy with this rebutal overall the changes have hugely improved the readability of the paper. When I wrote the first review I was erroneously under the impression that the rebutal system was like in ICML (just upload comments rather than a revision of the paper) thus why I was not inclined to re-evaluate. As you have addressed almost the entirerity of the points I will of course be updating (specifically increasing) my score, the quality of the paper has significantly increased (in terms of clarity, the technical strengths are now more visible).\n\nI will need a bit more time to process all the comments carefully but just wanted to give a quick update to enphasize that I will be re-assesing and updating the score positively.\n\nSome initial high level questions I still have: \n\n1. You mention a the derivation of eq 26 appeares in the DSB paper ([4] ) but not precisely where ? (worth clarifying if possible, Eq 26 involves a divergence and from my recollection DSB does not have any / does not focus on the divergence based replacement of scores) I can see that the same result (different strategy) is derived in [5] as you have pointed out here and in the updated appendix. I recognise your proof approach is different and arguably (when written clearly enough the length difference in the proofs is not all that much) more succint (avoids relying on FPE manipulations / results) however it might be important recognising the lemma in [5] as the initial work postulating this exact result which you then prove in a cleaner fashion (and expand tthe time coordinate differently). \n2. Some very minor comments on the derivation:\n a. dX with capital X makes this look like a stochastic integral ? maybe consider lower case or use Lebesgue-Stieltjes notation (dq) (completely optional)\n b. In the integration by parts , for the boundary term to vanish you need certain regularity (continuity/boundness) assumptions on ln q and Z it may be helpful for the reader to state these / link back to the standard Lip/Lingrowrth assumptions on the drifts for SDEs. Again this is an optional comment.",
" **3. New theoretical results/remarks**\n\n- We thank the reviewer for bringing up these comments. In Proposition 3 (L185-186), we prove that the minimizer of the combined loss indeed satisfies the FBSDEs derived in Theorem 1. This is a necessary and sufficient condition, and it asserts the validity of DeepGSB. Below we briefly summarize the proof (for the complete proof, please refer to App. A.5.2): Since (11c, 12b) is directly used to build the TD objectives, the majority of the proof is devoted to showing (11b,12c) shall also hold. This can be achieved by _(i)_ applying Ito lemma and FPE to derive dynamics of the parameterized marginal density (Eq 49), then _(ii)_ proving that the residual of (11b,12c), derived analytically in Eq 52, should vanish; hence (11b,12c) are satisfied.\n\n\n- As discussed in **2.4**, the combined objective cannot be interpreted as (reversed) KL/IPF objectives; thus, similar convergence results to IPF may be difficult to establish. Nevertheless, the interpretation that DeepGSB iteratively seeks a policy that, while being close to previous policy, improves TD optimality on off-policy trajectories suggests a close relation to TRPO [7], which was proven to enjoy monotonic improvement over iterations (i.e., local convergence). For general MFGs where the uncontrolled dynamic $f := f(x,\\rho)$ depends on population density $\\rho$, both methods (DeepGSB and Sinkhorn, e.g. [16]) admit local convergence. When the MFG is simplified such that its discrete problem admits convex structure, we observe empirically that DeepGSB still converges smoothly (see Fig 10 in App. A.5.4), whereas the reconstructed policy from [16] can suffer from discretization errors.\n\n---\n\n**4. Other clarification**\n\n- Time indices are added to $Y_t$ and $\\widehat{Y}_t$ in L139. We thank the reviewer for the suggestion. Additionally, all notations in Sec 3.3 are updated with parameters $(\\theta,\\phi)$ to distinguish between parametrized function vs. solution to Theorem 1.",
" **2.3 Derivation of Eq 26**\n\n- In the revision, we provide in detail the derivation of Eq 26, as well as the steps of integration by part, in order to obtain the final results. Proper citations to [2,3] have been added, and we thank the reviewer for providing these references.\n\n\n- We note that the derivation of Eq 26, which also appears in Diffusion SB [4], differs slightly from the standard IPF treatment (see _e.g._ Prop 1 in Sec 6.3.1 [5]) in that the KL divergence is expanded (using Girsanov Theorem) with respect to SDEs on different time coordinate. Specifically, let $q^\\theta$ and $q^\\phi$ be the path densities of the parametrized forward SDE, $\\mathrm{d}X_t^\\theta = Z_\\theta \\mathrm{d}t + \\cdots$, and parametrized backward SDE, $\\mathrm{d}X_s^\\phi = \\widehat{Z}_\\phi \\mathrm{d}s + \\cdots$. DSB (see Prop 3 in [4]) and our Eq 26 propose to expand $KL(q^\\theta | q^\\phi) = \\int \\mathbb{E}[||Z_\\theta + \\widehat{Z}_\\phi - \\sigma \\nabla \\log q^\\theta||^2] \\mathrm{d}s$ on the $s:= T-t$ coordinate, whereas the approach suggested by the reviewer (and [5]) propose to expand $KL(q^\\theta | q^\\phi) = \\int \\mathbb{E}[||Z_\\theta + \\widehat{Z}_\\phi - \\sigma \\nabla \\log q^\\phi||^2] \\mathrm{d}t$ on the $t$ coordinate. While both expansions eventually lead to the _same_ expression, as recognized by the reviewer, our derivation is slightly more compact as it does not invoke FPE etc. In the revision, we include both derivations for completeness (see Lemma 5, Lemma 6 & Prop 7 in App A.2; all marked blue), yet we urge the reviewer to recognize the difference.\n\n---\n\n**2.4 Difference between Alg 1 and IPF**\n\n- Using the notations in **2.3**, it should be clear that the alternating optimization scheme proposed in Alg 1 can be expressed succinctly as $\\min\\_\\phi KL(q^\\theta | q^\\phi) + \\mathbb{E}\\_{q^\\theta} [\\mathcal{L}\\_\\text{TD}(\\phi)]$ and $\\min\\_\\theta KL(q^\\phi | q^\\theta) + \\mathbb{E}\\_{q^\\phi} [\\mathcal{L}\\_\\text{TD}(\\theta)]$. Despite the fact that the procedure appears to be similar to IPF, which optimizes between $\\min_\\phi KL(q^\\phi | q^\\theta)$ and $\\min_\\theta KL(q^\\theta | q^\\phi)$, we stress that they differ from each other in that the the KLs are constructed in _**different**_ directions.\n\n\n- In cases where the TD objectives are discarded, prior work [4] has _proven_ that minimizing the forward KLs admits similar convergence to standard IPF (which minimizes the reversed KLs). This is the key to developing scalable methods, since the parameter being optimized (e.g. $\\theta$) in forward KLs differs from the parameter used to sample expectation (e.g., $\\mathbb{E}\\_{q^\\phi}$). Therefore, the computational graph of the SDE can be dropped, yielding a computationally efficient framework. These advantages have been adopted in [4,27] and also in this work for solving higher-dimensional problems.\n\n\n- To emphasize the additional runtime complexity that could be introduced from retaining the graph, the table below reports the per-iteration runtime (sec/itr) between [6], which is an FBSDE method that optimizes reversed KLs by retaining computational graphs, and our DeepGSB, which discards the graph and instead solves forward KLs. For fair comparisons, the dynamics are discretized into the same diffusion steps. 
It is clear that [6] exhibits a much longer per-iteration runtime (~10 times longer than our DeepGSB), which can prevent these methods from scaling to higher-dimensional MFGs.\n| | [6] | ours |\n|-------------------|------|------|\n| runtime (sec/itr) | 0.34 | 0.04 |\n\n\n- However, when we need TD objectives to enforce the MF structure, as appeared in all the MFGs in this work, the combined objective, _e.g._, $\\min\\_\\phi KL(q^\\theta | q^\\phi) + \\mathbb{E}\\_{q^\\theta} [\\mathcal{L}\\_\\text{TD}(\\phi)]$, does not correspond to IPF straightforwardly. Despite the fact that the alternating procedure in Alg 1 is mainly inspired by prior SB methods [4,27], the training process of DeepGSB is perhaps closer to TRPO [7], which iteratively updates the policy using the off-policy samples generated from the previous stage: $\\pi^{(i+1)} = \\arg\\min_\\pi KL(\\pi^{(i)} | \\pi) + \\mathbb{E}\\_{\\pi^{(i)}} [\\mathcal{L}(\\pi)]$. We provide implication on the convergence analysis in **3**.\n\n---\n\n[2] Variational perspective of diffusion models \n[3] MLE of SGM \n[4] Diffusion SB \n[5] ML approach for empirical SBP \n[6] Deep Graphic FBSDE \n[7] Trust Region Policy Optimization",
" **1. Summary of new presentation**\n\nWe first thank the reviewer for the valuable comments. We agree with the reviewer that the discussions on IPF in Sec 3.2 were unclearly stated in the initial submission, which led to unnecessary confusion and could mislead readers. These were never what we intended to emphasize nor imply. In the revision, we rewrite Sec 2, Sec 3, and App. A.2 carefully. This includes\n\n- The original small paragraph on IPF was removed completely. Instead, we only comment on the suitability of the _IPF-like_ objectives in [27] in solving our FBSDE systems (L155-161).\n\n\n- Solvability of general IPF/Sinkhorn to the MFG-PDE is mentioned in Sec 3.1 (footnote 4), with a reference pointing to App. A.5 for more details. Admitted that we're not allowed to add new page during rebuttal, we will move these discussions to main paper once permitted.\n\n\n- Derivation of Eq 26 (relation between Eq 6 & KL) is made formally in Lemma 5 (App. A.2, L524-534). For completeness, we also include an alternative derivation suggested by the reviewer in Lemma 6 & Prop 7 (App. A.2, L535-546). As this is an important concept that will help in understanding Alg 1, we provide a few comments in the main paper (L106-108 & L135) after proper contexts are introduced.\n\n\n- Sec 3.3 (Design of computational framework) has been substantially revised, where the logic flow now follows:\n - We first explain the difficulty encountered when training with prior objectives (6) from [27]. We only compare to [27] -- as a prior FBSDE-related method. (L151-161)\n - Motivated by these insights, we draw new connections to TD learning and then introduce proper TD objectives (16) for our problems. (L162-182)\n - Next, we establish _new_ theoretical results for the combined loss, (6) + (16), showing that its minimizer provides necessary and sufficient conditions for the FBSDE systems in Theorem 1. This asserts the validity of our DeepGSB. (L183-188; proof is left to App. A.5.2)\n - Finally, we explain (briefly) the difference between the resulting algorithm (Alg 1) and standard IPF, and draw connections to TRPO, which provides initial remarks for convergence analysis (L188-192 & App A.5.3).\n\n---\n\n**2. Clarification on IPF, Sinkhorn, and [27]**\n\n**2.1 Difficulty when optimizing with \"IPF-like\" objectives in [27]**\n\n- We first clarify the difficulty faced in the IPF-like objectives in [27], which is the main message we wish to convey. The mean-field FBSDE (MF-FBSDE) systems derived in Theorem 1 differ from [27] only in _(i)_ the addition \"$+F_t$\" in $\\mathrm{d}Y_t$'s SDE and _(i)_ the subtraction \"$-F_t$\" in $\\mathrm{d}\\widehat{Y}_t$'s SDE, where $F_t(x,\\rho)$ is the MF interaction. Hence, if one were to follow similar derivation (see Thm 4 in [27]) in an attempt to derive IPF-like objectives (admittedly they appear more like _variational ELBO_), $\\log \\rho_0 \\ge \\mathcal{L}(\\theta,\\phi) := \\mathbb{E}[\\int \\mathrm{d}Y^\\theta\\_t + \\mathrm{d}\\widehat{Y}\\_t^\\phi]$, the two terms \"$+F_t$\" and \"$-F_t$\" will eventually cancel out with each other regardless of the parametrization ($\\theta$,$\\phi$) or the choice of $F_t$. 
Algorithmically, this implies that naively adopting the objectives from [27] will not work for the MF-FBSDE.\n\n\n---\n\n**2.2 Solvability of general IPF & Sinkhorn to Eq 7**\n\n- Having said that, we agree with the reviewer that general IPF, as a coordinate ascent method, can still be used to solve these MFGs, which can be seen as generalized SB with nontrivial state cost. In fact, this is precisely the method proposed in [16], which we detailed in Table 1 (also App A.5.3) and compared extensively with DeepGSB in Fig 5. [16] suggests that, upon discretization in both state space and time, Sinkhorn converges to the global solution, provided that _(i)_ the mean-field preference $\\mathcal{F}(\\rho)$ (see Appendix A.3.3) is convex in the population density $\\rho$, and _(ii)_ the base dynamics are independent of population density, $f:= f(x)$ in Eq 7. These conditions ensure that the discretized problem remains convex and, from there, IPF results can be applied. Finally, we note that for general MF dynamics, such as the polarized $f(x,\\rho)$ in our Eq 18, the problem is in general non-convex; hence only local convergence can be established (see Remark 1 in [16]).\n\n\n- It should be emphasized that the results from [16], despite being elegant and promising, require discretization both spatially and temporally. As its complexity scales as $\\mathcal{O}(D^2T)$, where $D$ is the number of spatial grid points (which grows exponentially with the dimension $d$), their method soon becomes prohibitively expensive as $d$ grows. In practice, we observe that the reconstructed policy from the _discrete_ marginal density, following the algorithm proposed in Sec III-C [16], can perform poorly due to discretization, especially when the dynamics are further subject to stochasticity.\n\n---\n\n[16] Density Control of agents \n[27] Likelihood training of SB using FBSDE",
" **3. MFGs with soft constraint**\n\n- It is possible to solve MFGs with soft constraints using FBSDE frameworks. In such cases, one may directly apply the nonlinear Feynman-Kac lemma to Eq 1 and derive its corresponding FBSDEs. However, as shown in prior works [11,12], only one FBSDE system can be constructed, as the trajectories can only be simulated forwardly from the initial distribution $\\rho_0$. Algorithmically, this makes scalable methods such as IPF unapplicable, and one must differentiate through the computational graph (as in [1,2]) to obtain gradients.\n\n\n- To emphasize the additional runtime complexity that could be introduced from retaining the graph, the table below reports the per-iteration runtime (sec/itr) between [12], which is an FBSDE method that optimizes reversed KLs by retaining computational graphs, and our DeepGSB, which discards the graph and instead solves forward KLs. The values are measured on the same GPU machine (TITAN RTX), and the dynamics are discretized into the same diffusion steps for fair comparisons. It is clear that [12] exhibits a much longer per-iteration runtime (~10 times longer compared to our DeepGSB), which can prevent these methods from scaling to higher-dimensional MFGs.\n| | [12] | ours |\n|-------------------|------|------|\n| runtime (sec/itr) | 0.34 | 0.04 |\n\n\n- Lastly, we note that the distance to the target density at the terminal time stands as a key evaluation metric in prior ML-based methods [1,2], often emphasized by visualizing the population snapshots at terminal or reporting the numerical KL divergence to the target density. Despite the fact that these methods [1,2] were proposed to solve _soft_ distributional constraint, the corresponding MFGs were often tuned with a rather large terminal penalty to ensure that their methods converged to $\\rho_\\text{target}$. In this view, DeepGSB shares a similar motivation as [1,2] (in satisfying target distribution in MFGs) but solves the problems via a more principled framework under optimal transport and mean-field SB.\n\n---\n\n[11] Convergence of MFG - The Finite Horizon Case \n[12] Deep Graphic FBSDE",
" **1. Access to target density $\\rho_\\text{target}$**\n\n- We thank the reviewer for raising the comment (also raised by Reviewer TjfA). We first note that availability to target density is a common assumption currently adopted in most ML-based solvers [1,2], in which the target density is involved in computing the terminal cost $KL(\\rho_T|\\rho_\\text{target})$. Yet, admitted that the target density may not be available for some applications, we stress that our DeepGSB can work _without_ $\\rho_\\text{target}$ so long as we can sample from $X_0\\sim \\rho_0$ and $\\bar{X}_0\\sim \\rho_\\text{target}$ (similar to generative modeling).\n\n\n- In Fig 10 (see Appendix A.5.4 in the revision), we show that the DeepGSB trained _without_ the initial and terminal densities can converge equally well. Crucially, this is because DeepGSB replies on a variety of other mechanisms (_e.g._, self-consistency in single-step TD objectives & KL-matching in IPF) to generate equally informative gradients. This is in contrast to [1,2], where the training signals are mostly obtained by _differentiating_ through $KL(\\rho(X_T)|\\rho_\\text{target}(X_T))$ (which is not required in DeepGSB); consequently, their methods fail to converge in the absence of $\\rho_\\text{target}$.\n\n\n- The choices of Gaussians in our experiments were made only to match those setups from [1,2], so that DeepGSB can be validated on an equal footing, rather than to leverage the values of target density during training. In Fig 11 (see Appendix A.5.4 in the revision), we further showcase the capability of DeepGSB in dealing with _unknown_ target density beyond tractable Gaussian. We will include these examples in the revision, and we thank the reviewer for bringing up the topic.\n\n---\n\n**2. MFGs with distributional constraints**\n\n- Being able to solve MFGs under distributional constraints has an important application in _**modeling**_ the flow of time-marginal densities for an interactive collective (which can correspond to agents, particles, or resources). In many scientific modeling problems, the observer often has access to a collection of population snapshots that describe partially the dynamics of some complex systems composed of evolving particles, and the goal is to estimate the parametrized dynamics given certain interacting preferences between individuals, so that the learned model can be used for interpolation, forecasting, or other analysis.\n\n\n- Such setups occur in many scientific fields, for instance economics modeling given an observed wealth distribution [3], time series analysis or densities in meteorology [4,5,6], multi-task tracking [7], and, more recently, evolution of cells and RNA-sequencing in Biology [8,9]. Optimal transport models (such as SB) have become increasingly popular for these problems, as nature often prefers least-energy solutions. Recently, the setup has also become popular in the MFG domain, described as an inverse problem [10]. To give a concrete example, imagine a realistic scenario where the change of opinions due to a newly occurred event were scraped from social media, and, given these opinion snapshots recorded before and after the event, we wish to testify how different interacting mechanisms drive the evolution of opinion density during the event, and further predict future evolution. This essentially requires solving a distributional boundary constrained problem, in which the interacting mechanisms must be respected. 
Further, these interaction preferences, if drawn from other measurements or presented as discrete variables, may not be differentiable.\n\n\n- Our DeepGSB provides an elegant computational framework for solving the aforementioned problems, as it seeks solutions that _(1)_ satisfy population measurement at boundaries, _(2)_ respect the interacting preferences between individuals (_i.e._, $(f,F)$ in our PDE), _(3)_ handle non-differentiable preferences, _(4)_ does not require tractable target density (see discussion in **1.**), and finally _(5)_ remain scalable to higher-dimensional MFGs. As such, we believe DeepGSB can make significant steps toward a scalable numerical solver in advancing interactive population modeling.\n\n---\n\n[1] ML framework for high-dim MFG \n[2] Alternating NNs to solve high-dim MFG \n[3] Portfolio optim. w/ terminal wealth distribution \n[4] Structured Inference Networks \n[5] Data assimilation in weather forecasting \n[6] FourCastNet \n[7] Estimating ensemble flows on a HMM \n[8] Proximal OT of Population Dynamics \n[9] OT Analysis of Single-Cell Gene \n[10] inverse mean-field game inverse problems",
" **3. Discussion on scalability & limitation**\n\n- We first note that the notions of _scalability_ or _complexity_ in the area of mean-field games (MFGs), or more generally multi-agent systems, typically refer to the capability to handle _larger-scale interactions_ between agents. As the number of agents increases, these interactions become exponentially more difficult to solve, and at their limit as an MFG, require estimating the population density in _probability_ space and solving coupled PDEs as in Eq 1,7. We wish to emphasize this distinct aspect of complexity that may not appear in other ML applications.\n\n\n- Due to the aforementioned complexity, the crowd navigation MFGs, despite being in lower dimension, can be made sufficiently hard once the mean-field (_i.e._, infinite agents) interactions are introduced, and even the best existing DNN solvers [3,4] can fail to solve them reliably (see Fig 5). In contrast, the performance of our DeepGSB stays rather stable w.r.t. the variations of MFGs, including different configurations (GMM vs. V-neck vs. S-tunnel; see Fig 5), hyperparameters (e.g., diffusion $\\sigma$ (see last 3 rows in Fig 5) or horizon $T$ (see Table 6)), and parametrizations (actor-critic in Fig 5 vs. critic in Fig 7).\n\n\n- The capability of DeepGSB in solving higher-dimensional (and more complex) MFGs was further validated on opinion depolarization MFGs. To emphasize where this MFG stands in comparison to other higher-dimensional MFGs, we note that the highest-dimensional ($d$=100) MFGs reported in literature [3,4] admit unsatisfactorily simplified structures, in which _(i)_ the mean-field interaction $F$ only affects the _first 2 dimensions_, and _(ii)_ the dynamic of individual agent is unaffected by the population density $\\rho$, _i.e._ $f := f(x)$. Neither of these simplifications was adopted in our opinion depolarization MFGs, whose mean-field structure $F(x,\\rho)$ interacts across _all_ 1000 dimensions while subjecting to a polarized dynamic $f_\\text{polarize}(x,\\rho)$ through interacting with $\\rho$.\n\n\n- In additional to the limitation mentioned in L281-282, that DeepGSB is applicable only to unconstrained state spaces, the divergence terms appearing in both IPF (Eq 6) and TD (Eq 14) objectives may scale unfavorably as the dimension grows. Indeed, these operations are often approximated by the Hutchinson trace estimator [5], which exhibits high variance in high dimension. This limitation, as brought up by the reviewer, may be mitigated by replacing them with the simpler (yet equivalent) regression objectives used in DSB [2], which enjoy lower variance. This is an interesting direction that is worth further exploration. We have included these discussions in the revision (L282-286), and we thank the reviewer again for the valuable discussion.\n\n\n---\n\n**4. Other clarifications**\n\n- As the reviewer hypothesized, computing $\\widehat{\\text{TD}}_0$ indeed requires knowing the target density $\\rho_\\text{target}$. We note, however, that this is a common assumption adopted in most ML-based solvers [3,4], in which the target density is involved in the computation of terminal cost $KL(\\rho_T|\\rho_\\text{target})$. Yet, admitted that the target density may not be available for applications such as generative modeling, we stress that our DeepGSB can work _without_ $\\widehat{\\text{TD}}_0$ so long as we can sample from $X_0\\sim \\rho_0$ and $\\bar{X}_0\\sim \\rho_\\text{target}$. 
We refer the reviewer to Fig 10 (see Appendix A.5.4 in the revision), where we show that the DeepGSB trained without $\\widehat{\\text{TD}}_0$ and $\\text{TD}_0$ converges equally well. Finally, we stress that both prior methods [3,4] fail to converge in the absence of $\\rho_\\text{target}$.\n\n\n- Our answer to the code/instruction reproducibility in the checklist 3(a) was based primarily on the _pseudocode_ in Alg 1. However, as we strongly believe in the merits of open sourcing, we can confidently assure that we will release our code upon publication. The revised Table 6 should also make it easier for anyone to replicate our results.\n\n\n- Typos in L4, L37, L73, and L197 (previously L203) have all been corrected. We thank the reviewer for the meticulous reading!\n\n---\n\n[3] ML framework for high-dim MFG \n[4] Alternating NNs to solve high-dim MFG \n[5] A stochastic estimator of the trace ...",
" **1. Comparison to the IPF used in FBSDE [1] and DSB [2]**\n\n- The mean-field FBSDE (MF-FBSDE) systems derived in Theorem 1 differ from [1] only in _(i)_ the addition \"$+F_t$\" in $dY_t$ SDE and _(i)_ the subtraction \"$-F_t$\" in $d\\widehat{Y}_t$ SDE, where $F_t(X,\\rho)$ is the MF interaction. Hence, if one were to follow similar derivation (see Thm 4 in [1]) in an attempt to derive IPF-like objectives (admittedly they appear more like _variational ELBO_), $\\log \\rho_0 \\ge \\mathcal{L}(\\theta,\\phi) := \\mathbb{E} \\int dY^\\theta\\_t + d\\widehat{Y}\\_t^\\phi $, the two terms \"$+F_t$\" and \"$-F_t$\" will eventually cancel out with each other regardless of the parametrization ($\\theta$,$\\phi$) or the choice of $F$. We wish to emphasize its implication - that naively adopting the results from [1] will not work for this MF-FBSDE system.\n\n\n- Since the IPF objective in [1] is, as recognized by the reviewer, equivalent to the one in DSB [2], issues arising in FBSDE will also be faced in DSB. Hence, while DSB indeed provides a simpler regression IPF objective compared to [1], both methods [1,2] are insufficient to solve the MFG PDE in Eq 7 -- which is our problem of interest. Specifically, they will converge to the entropy-regularized OT solution with degenerate MF structure $F=0$. In this vein, by first decomposing the MFG PDE into its equivalent MF-FBSDE, then recognizing the underlying temporal difference (TD) structure in these FBSDEs, additional TD objectives can be introduced to encourage training to respect nontrivial $F$. Further, it naturally handles non-differentiable $F$, which can offer extra benefits in many cases.\n\n\n- As the aforementioned \"TD objective\" arises specifically from (MF-)FBSDE structures, we highlight it as a distinct benefit from FBSDE SB approaches, that is otherwise absent in [2]. However, we also note that the regression IPF objective in DSB may be used in place of the FBSDE-based IPF objective in Eq 6 to improve the computational complexity (more discussions in **3.** below). This is indeed an interesting future direction, and we thank the reviewer for bringing up the topic.\n\n\n- Admittedly, that the presentation in Sec 3.2 has caused confusion in reading (also raised by Reviewer A4H3). In the revision (L155-161) we restate the paragraph to emphasize the difference to [1] (while excluding other methods), as we mainly intend to compare to FBSDE prior works. Again, we thank the reviewer for raising these comments.\n\n\n---\n\n**2. Training procedure of DeepGSB (and FBSDE)**\n\n- The table below summarizes the additional hyperparameters adopted for training DeepGSB, including the diffusion steps, number of training iterations per IPF iteration (_i.e.,_ the $K$ in Alg 1), total IPF iterations (_i.e._, the number of cycling through 2$K$ steps in Alg 1), total training iterations, and finally the total training time measured on the same GPU machine. We note that, as suggested by [2], $K$ should be set large enough to ensure the objective is _minimized_ at each IPF iteration. Hence, as conjectured by the reviewer, $K$ cannot be too small. In practice, we find that setting $K>200$ seems sufficient to yield good results in all MFGs, and the convergence of DeepGSB stays relatively stable with respect to $K$. 
These missing details are included in the revision (see Table 6), and we thank the reviewer for noticing them.\n| | diffusion steps | itr/IPF | # IPF itr | Total train iterations | Total time |\n|--------------------|-----------------|---------|-----------|------------------------|--------------|\n| GMM | 100 | 250 | 40 | 20k | 22 min |\n| V-neck | 200 | 250 | 40 | 20k | 30 min |\n| S-tunnel | 300 | 500 | 30 | 30k | 50 min |\n| Opinion ($d$=1000) | 500 | 250 | 90 | 45k | 10 hr 45 min |\n\n\n- While we agree with the reviewer that the previous FBSDE method [1] appeared to use an ad hoc procedure between the first IPF and later fine-tuning, we want to emphasize that those procedures are **_NOT_** used in our DeepGSB. Specifically, the number of training iterations per IPF step ($K$) is kept _fixed_ throughout training. As such, the first IPF step performs the same training iterations as essentially all the other steps; hence our training procedure differs from [1]. In the revision, we include visualization (see Fig 9 in App A.5.4) at different training stages, showing that the DeepGSB policy typically converges smoothly over training; thus, it does not require fine-tuning either. We highlight these distinctions as a new application of SB to an equally important problem (MFG) beyond generative modeling, which may further open up new opportunities in _e.g._, DeepRL with distributional constraints.\n\n---\n\n[1] Likelihood training of SB using FBSDE \n[2] Diffusion SB",
" **3. Proof of Proposition 2**\n\n- The proof of Prop 2 is now included in Appendix A.5.1. To summarize the proof, given the parametrized forward and backward policies ($Z_t^\\theta$, $\\widehat{Z}_t^\\phi$), we can compute the single-step TD target (14a) by\n\n - Sample forward SDE (11a) with $Z_t^\\theta$. The trajectory can be compactly represented by a sequence of tuples $(X_t^\\theta, Z_t^\\theta, \\delta W_t)$ sampled on some discrete time grids.\n\n - These tuples, in conjunction with $\\widehat{Z}_t = \\widehat{Z}_t^\\phi(X_t^\\theta, t)$, can be used to calculate the incremental change of $\\delta \\widehat{Y}_t$, _i.e._ the RHS of (11c), where the stochastic term is given by $\\widehat{Z}^\\top \\delta W_t = \\widehat{Z}_\\phi(X_t,t)^\\top \\delta W_t$.\n\n - With $\\delta \\widehat{Y}_t$, we can construct the single-step TD target as in (14a). The multi-step TD can be constructed accordingly, and the derivation of (14b) also follows similarly.\n\n---\n\n**4. Other clarifications**\n\n- The \"robustness to hyperparameter\" in L247 refers to an empirical observation that the performance of our DeepGSB stays rather stable w.r.t. the variations of MFGs, including different configurations (GMM vs. V-neck vs. S-tunnel; see Fig 5), hyperparameters (e.g., diffusion $\\sigma$ (see last 3 rows in Fig 5) or horizon $T$ (see Table 6)), and parametrizations (actor-critic in Fig 5 vs. critic in Fig 7). In contrast, prior methods can be very sensitive to hyperparameters/configurations (e.g., $\\sigma$ for [2] and discretization for [6]), or rather insensitive such that the underlying MF structure may not be fully reflected (see 2nd & 3rd rows in Fig 5 for [1]). We have restated the paragraph (now in L241;marked blue) in the revision to avoid future confusion.\n\n\n- Typo has been corrected in the terminal condition of Eq (1) to $u(x,T) = G(x, \\rho(\\cdot,T))$; see L36. We also restate L64 by introducing SB directly as an \"_entropy-regularized optimal transport problem_\", and we add proper citations to prior works [1,2] in L49-52. We thank the reviewer for the meticulous reading.",
" **1. Comparison to [1,2] in runtime complexity & remarks on high-dimensional MFGs**\n\n- While Table 1 only aims to compare the highest dimension reported in related ML methods [1,2], in practice we experienced difficulties in terms of computational complexity and algorithmic sensitivity that may prevent [1,2] from scaling to higher dimension $d$. To be precise, the table below reports the per-iteration runtime (sec/itr), total training time (hour), and total training iterations on crowd navigation MFGs ($d$=2). All values are measured on the same GPU machine (TITAN RTX), and the dynamics are discretized into the same diffusion steps for fair comparisons.\n| | runtime (sec/itr) | total training time | total training iterations |\n|------|-------------------|---------------------|---------------------------|\n| [1] | 5.27 | 7.3 hr | 5k |\n| [2] | 0.08 | 0.3 hr | 17k |\n| ours | 0.07 | 0.3 hr | 20k |\n\n\n- It is clear that [1] has a prohibitively longer per-iteration runtime (75 times longer than our DeepGSB), which prevents them from scaling to higher $d$. On the other hand, while [2] admits a similar runtime as our DeepGSB, in practice their method is sensitive to MFG structure and hyperparameters. This is best evidenced in Fig 5 (see rightmost column, last 3 rows), as a small increase in diffusion from $\\sigma$=0.5 to 1 can significantly affect the convergence of their method in 2D crowd navigation; let alone for higher $d$. In contrast, our DeepGSB enjoys an efficient runtime and its convergence performance stays robust across different variations of MFGs (see Fig 5 & 7).\n\n\n- Finally, we also note that the higher-dimensional ($d$=100) MFGs considered in [1,2] admit unsatisfactorily simplified structures , where _(i)_ the mean-field interaction $F$ only affects the _first 2 dimensions_, and _(ii)_ the dynamic of individual agent is unaffected by the population density $\\rho$, _i.e._, $f := f(x)$ in Eq (7). Neither of these simplifications was adopted in our opinion depolarization MFGs, whose mean-field structure $F(x,\\rho)$ interacts across _all_ 1000 dimensions while subjecting to a polarized dynamic $f_\\text{polarize}(x,\\rho)$ via interacting with $\\rho$ (see Fig 6a and Eq 18).\n\n---\n\n**2. Comparison to baselines in Wassertein distance $\\mathcal{W}_2$**\n\n- We thank the reviewer for the suggestion. The table below reports the $\\mathcal{W}_2$ between samples generated from each method and the target distribution $\\rho_\\text{target}$ on a crowd motion MFG. We adopt the same GMM configuration except without the MF interaction, i.e., $F := 0$. As suggested by the reviewer, this MFG is solvable by _all_ methods [1,2,6,ours] after proper tuning. As can be seen, our DeepGSB has the lowest $\\mathcal{W}_2$ value, putting it closest to $\\rho_\\text{target}$. We highlight this as the benefit gained from the principle constrained optimization grounded on SB (which is absent in [1,2]) yet without discretizing the state space (as in [6]) that may in turn affect convergence accuracy.\n| | [1] | [2] | [6] | ours |\n|------------|------|------|------|------|\n| $\\mathcal{W}_2(\\rho_T,\\rho_\\text{target})$ | 4.12 | 14.8 | 2.96 | 1.94 |\n\n\n- For completeness, we also report the $\\mathcal{W}_2$ in the other 2 crowd navigation MFGs, _i.e._, V-neck and S-tunnel. We note that these 2 MFGs were already designed with large terminal penalties, so that [1,2] solve similar MFGs as in DeepGSB; hence can be compared upon similar footing. 
The table below shows that DeepGSB achieves lower $\\mathcal{W}_2$ values in all cases, across different MFGs and hyperparameters ($F$, $\\sigma$).\n| | baselines ([1] for V-neck, [2] for S-tunnel) | ours |\n|-------------------------|----------------------------------------------|-------|\n| V-neck (w/o $F$) | 0.23 | 0.001 |\n| V-neck (w/ $F$) | 0.31 | 0.01 |\n| S-tunnel ($\\sigma$=0.5) | 6.26 | 0.07 |\n| S-tunnel ($\\sigma$=1) | 6.24 | 0.01 |\n| S-tunnel ($\\sigma$=2) | 6.17 | 0.01 |\n\n---\n\n[1] ML framework for high-dim MFG \n[2] Alternating NNs to solve high-dim MFG \n[6] Density Control of agents",
" We thank the reviewers for their valuable comments. We are excited that the reviewers identified the novelty of our technical contributions (Reviewer ijsA, TjfA, ckTW, A4H3), appreciated the algorithmic connection to TD learning (Reviewer ijsA, TjfA, ckTW), acknowledged our superior empirical results over prior methods (Reviewer ijsA, ckTW, A4H3), and found the paper well-written (Reviewer ijsA, ckTW). We believe our DeepGSB takes a significant step toward a novel design of Schrodinger Bridge as a scalable method for solving an important class of Mean-Field Games.\n\n---\n\nAs _all_ reviewers recognized our technical novelty, the primary critics (raised by Reviewer A4H3) stemmed from the insufficient clarification on presentation at the end of Sec 3.2 (after Theorem 1), 3.3, and the missing proofs in Appendix. We agree that some of them were unclearly stated in the initial submission, which led to unnecessary confusion and could mislead readers. These were never what we intended to emphasize nor imply.\n\nIn the revision, we rewrite Sec 2, Sec 3, and App. A.2 carefully and extensively. Notable changes in the main paper are enumerated below (marked blue in the revision). We also provide **new theoretical results (Prop 3 in L185, proof in App A.5.2)** asserting the validity of DeepGSB. Additional clarifications on proofs/remarks/experiments are included in **Appendix A.5**.\n\n- The original small remark on IPF was removed. Instead, we comment, exclusively, on the suitability of the _IPF-like_ objectives in [1] to solving our FBSDE systems. (L155-161)\n\n- Solvability of general IPF/Sinkhorn to the MFG-PDE is mentioned in Sec 3.1 (footnote 4), with a reference pointing to App. A.5 for more details. Admitted that we're not allowed to add new page during rebuttal, we aim to move these discussions to main paper once permitted.\n\n- Relation between Eq 6 & KL is discussed extensively in Lemma 5, Lemma 6, and Prop 7 (App. A.2, L524-546). As this is an important concept that will help in understanding Alg 1, we provide a few comments at the end of Sec 2 (L106-108) and Footnote 4 (L135) after proper notations or related contents are introduced.\n\n- Sec 3.3 (Design of computational framework) has been substantially revised, where the logic flow now follows:\n - We first explain the difficulties encountered when adopting prior training pipelines from from [1]. (L151-161)\n - Motivated by these insights, we draw new connections to TD learning and then introduce proper TD objectives for our problems. (L162-182)\n - Next, we establish _new_ theoretical results for our proposed objective, showing that its minimizer provides necessary and sufficient conditions for the FBSDE systems in our Theorem 1. This is a nontrivial result asserting the validity of our DeepGSB. (L183-188. Proof is left to App. A.5.2)\n - Finally, we explain (briefly) the difference between the resulting algorithm (Alg 1) and standard IPF, and draw connections to DeepRL methods, which provide initial remarks for convergence analysis (L188-192 & App A.5.3).\n\nWe hope the new presentation is clearer in conveying how our method is motivated and constructed. We try our best to resolve all raised concerns in the individual responses below. We sincerely hope Reviewer A4H3 will reconsider the rating and re-evaluate at an entirety.\n\n---\n\n[1] Likelihood training of SB using FBSDE",
" This article proposes *Deep Schrödinger Bridge (DeepGSB)*, a numerical algorithm for solving large-scale (in state dimension) stochastic Mean Field Games (MFG) with hard distributional target constraint, continuous state space and flexible mean-field (MF) interaction function. The authors adapted the use of Hopf-Cole transformation and Schrödinger factors, similar to in [1], to transform MFG PDEs (with hard target) to a system of Schrödinger Bridge (SB) PDEs that depend on an MF interaction term $F$. In order to solve this system, the authors extended the Forward-Backward Stochastic Differential Equation (FBSDE) representation for (SB) from [2] to account for $F$. DeepGSB solves this FBSDE by fitting neural network approximators in an alternating manner, similar to in [2] and past works, with an added TD-like loss function to account for $F$. Experiments are performed for a crowd-navigation use case, similar to in [3, 4], and an opinion depolarization use case (based on the party model [5]).\n\n[1] Wasserstein Proximal Algorithms for the Schrödinger Bridge Problem: Density Control with Nonlinear Drift. K.F. Caluya, A. Halder.\n\n[2] Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory. T. Chen, G. Liu, E. Theodorou.\n\n[3] A machine learning framework for solving high-dimensional mean field game and mean field control problems. Ruthotto, et. al.\n\n[4] Alternating the Population and Control Neural Networks to Solve High-Dimensional Stochastic Mean-Field Games. Lin, et. al.\n\n[5] Polarization in geometric opinion dynamics. Gaitonde, et. al. Strengths\n- A neural-based numerical solver for MFGs with exact terminal distribution and flexible mean field interaction can be highly useful and desirable in practice. In the crowd motion use case, the method is shown to respect the obstacles (e.g., flexible MF interaction) without losing convergence to the target, implying that in a real-world scenario, the agents are moving safely until they finally reach their intended destination. Present methods are only able to solve for a smaller class of MFGs.\n- In addition to exactness and flexibility, the method is empirically shown to be scalable by solving a high-dimensional ($d=1000$) instance. Having surveyed related works, as far as I can tell, this seems to be the largest documented to date.\n- The algorithm is as simple as/simpler than APAC-Net [1] and [2], yet it solves a wider class of MFGs. Setting the TD terms as regressand to allow account for $F$ is a principled way to handle a flexible of choice of $F$. As the paper states, this can be generalized further to make use of other computational tools in reinforcement learning (RL). As it stands, the design choices make use of best practices in RL, such as replay buffers and using multi-step TDs.\n- The use case of opinion depolarization by adopting the party model in an MFG seems to be novel and it seems very realistic to have a high-dimensional opinion space. This could be a good test bed for future (and past) numerical MFG solvers.\n- To obtain an FBDSE system for an MFG, the authors extended and proceeded very similarly to an established result in [3] (Theorem 3 and its proof) originally intended for generative models. This reuse unlocks a numerical solver for a wide class of MFGs.\n\nWeaknesses\n- The ML-based methods in [1] and [2] seem to be designed to solve high-dimensional MFGs. This paper seems to indicate that they are not able solve for $d=1000$ but it's not clear why (too much computational resource needed?). 
It might be informative to compare the computational complexity and/or running time of the methods (instead).\n- The direct comparisons against present methods are done visually (Figures 5 and 7, in the appendix). It would be more rigorous to compare the Wasserstein distance to the target of the proposed method against that of the present methods. I see that it can be hard to put all the methods on an equal footing, as the proposed method solves a wider class of MFGs. It would be a fair comparison though if we restrict to a class of MFGs that are solvable by all the methods -- it seems that crowd motion is a use case that is also showcased in [1,2]. A dynamical optimal transport problem can also be a good test bed, as per Example 1 in [1].\n- There are some clarity issues -- while the paper is mostly well-written, it would be beneficial for non-expert readers to make some changes:\n1. (Line 64) I don't think it's correct to write that SB is an emerging machine learning *model*; it might be more accurate to introduce the topic by stating what the SB problem is.\n2. In equation 1, what is the domain of $G$? Shouldn't it be $\\mathbb{R}^d \\times \\mathcal{P}(\\mathbb{R}^d)$? If so, then the value function should be defined as: $u(x,T)=G(x,\\rho(\\cdot,T))$.\n3. (Line 49-52) As the authors discuss the drawbacks of present methods, it would be good to include citations for every drawback (which method has which drawback).\n4. Proposition 2 does not seem to be proven. Please provide at least a proof-sketch -- it is not obvious to me how exactly the random terms in equations 14(a,b) are dealt with.\n5. There are grammatical mistakes, but they do not seem to impact my understanding. It would be beneficial if the authors were to edit the article.\n6. It is not clear to me why DeepGSB is more robust to hyperparameters (Line 247). Can you please clarify via a theoretical and/or empirical argument?\n\n[1] Alternating the Population and Control Neural Networks to Solve High-Dimensional Stochastic Mean-Field Games. Lin, et al.\n\n[2] A machine learning framework for solving high-dimensional mean field game and mean field control problems. Ruthotto, et al.\n\n[3] Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory. T. Chen, G. Liu, E. Theodorou.\n Every item under \"Weaknesses\" contains a question and/or suggestion that can be answered/addressed. Adequately addressed",
" The authors apply diffusion Schrodinger Bridges (DSB) to mean field game applications, this generalizes existing DSB approaches to include mean field interactions. The authors utilise a novel loss term based on temporal difference objectives within each IPF-like iteration.\n\nThe experiments are validated on 2D crowd navigation and opinion depolarisation experiments. Strengths\n- This is an interesting paper and connects DSB beyond generative modelling to interesting mean field game tasks.\n- The contributions are simple but effective, adding a mean field term to SB and deriving a new loss based on temporal difference\n\nWeaknesses\n- Line 159 is not very clear, the objective of FB-SDE is only a log-likelihood at convergence and appears to only be justified by its equivalence to IPF \n- Is the statement that IPF methods will not work with MF interaction true for the the regression objective of [1]?, between line 164 and 165\n- Experiments appear relatively simple, does the method scale to more complex examples?\n- It does not appear that the code is available, however the authors have stated that the code is available in the checklist. Have I missed the link?\n- Some experimental details such as number of IPF iterations, diffusion steps, training details are missing.\n- Not clear how to tune training procedure in FBSDE approach. \n - If number of training steps per IPF iteration, denoted K in Algo 1, is very small then this will just map noise to noise per IPF iteration hence not solve SB and surely cannot work?\n - If like in the original FBSDE SB paper [2] the forward/ backward networks are trained for a large number of training iterations in the first IPF step then fine-tuned by a small number of steps in later IPF steps, do they get stuck in local minima? is fine tuning doing much or is this essentially the same as score based generative modelling? If fine tuning and not following IPF by training each network to completion at each IPF step, is this actually approximating Optimal Transport? There appears to be no empirical evidence this is close to OT\n \nMinor\n- Line 4 typo, \"preferences needs not available\" does not make sense\n- Capital \"L\" needed for Laplacian operator, line 37\n- Typo line 73, \"need not be continuous\" is fine\n- Line 203, \"testify\" seems to be the wrong word here\n\n[1] Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling, Bortoli 2021\n[2] LIKELIHOOD TRAINING OF SCHRÖDINGER BRIDGE USING FORWARD-BACKWARD SDES THEORY, 2022 - Is the code available?\n- Is there a reason to use the FB-SDE SB approach over the simpler original DSB approach given they have the same objective?\n- How many IPF iterations were used? And how many training steps, K, per IPF iteration?\n- What is the run time/ training time?\n- How is $\\hat{TD}_0$ computed if it requires the log of the target density?, line 182\n The authors have not discussed limitations. Given the iterative nature of the procedure, similar to other SB approaches, it appears difficult to scale. Experiments are relatively toy-ish.",
" This paper proposed the DeepGSB, a framework for solving (a variant of) mean-field games including non-differentiable interaction terms. The key idea is to replace the usual terminal condition on the control by a hard constraint on the target distribution, and then invoke the Schrodinger Bridge (SB) framework.The authors provide promising empirical evidence for the proposed method. Strength:\n\n- This paper is well-written, with ideas clearly presented.\n- The proposed model is novel and interesting.\n- The connection to TD-like objectives is intuitive and may potentially be useful in practice.\n\n\nWeakness:\n- The motivation for replacing the terminal control constraint with the hard distributional constraint is not very convincing to me. Under what circumstance does such a scenario arise? To me, it seems like for most applications, the target distribution is almost never available. In the experimental section, the authors simply assume that the target density is some simple distributions (such as Gaussians), for which I fail to grasp the reason why this is reasonable. My two major questions are:\n\n1. Is there any chance to relax the hard constraint on the target density? For me, this seems quite hard as the SB is mostly expressed in the two fixed end-points formulas.\n\n2. Is there any practical motivation for considering the hard distributional constraint formulations of MFGs? As mentioned in my review, although this paper aims at solving MFGs, its proposed variant does not seem particularly interesting to me (although the technique is elegant). I'm willing to raise my score if the authors can convince me of the value of their setting.",
" The authors of this work propose and explore extensively a unique and elegant generalization of SBPs to Mckean-Vlasov type of SDEs / MFGs. Whilst these types of systems have already been proposed and explored in the literature, the connection to SBPs and the reinterpretation as a generalization is novel and elegant. One of the contributions of this work is that leveraging some new interpretations the authors propose new ML aided numerical algorithms to solve these systems and provide extensive experimental evidence showing their success and some initial theoretical motivation. Overall there are issues issues in the way the paper is written and how some previous ideas / methods are underplayed (and arguably innacurately portrayed). I believe this was not the author's intention as they merely seeked to showcase / emphasize their contribution however I do believe it has a negative rather than a positive effect when reading the paper with the potential of confusing the reader. Furthermore there is a gap connecting the practical optimisation scheme used to the central theoretical results in the paper (main), whilst the connection may be in the appendix or possible to infer it is not clearly outlined. \n\nPros: \n1. Sufficient and Extensive successful experimental criteria\n2. Clear and accessible lit review of SBPs and MFGs\n3. Cleanly presented and readable results/theorems \n\nCons:\n1. There is a gap between Theorem 1 and the training algorithm which makes it difficult for a reader to motivate the origin of the algorithm\n2. Lack of clarity when connecting (or arguing significant differences) to IPF\n3. Overall arguments in the paper are not well connected making it difficult for a reader to assert the validity of the final approach in particular bridging the theory and the practice.\n4. As a result there seems to be no convergence proofs or guarantees for algorithm 1 , its somewhat heuristically motivated by IPF and Theorem 1 but there is no overarching argument establishing / justifying its convergence. \n Questions / Corrections (suggestions):\n\n1. The claim that IPF in all generality does not suffice based on a naive marginal based iteration of IPF specifically in termed “likelihood training objectives” proposed by [27] seem like a major overstatement and not a rigorous one either. At the very best the authors should state that the IPF flavor proposed in [27] is not suitable for the generalized SB since that is what they motivate. In the most general setting as a coordinate ascent algorithm IPF and Sinkhorn probably still have theoretical guarantees and could potentially admit practical iterations for this problem, the authors have not proved otherwise and should not state otherwise. It's not helpful for the reader to inflate claims in this manner, especially without proof (you do provide an argument but that proves something much more specific). This is my main counterpoint. If the authors are happy to clarify and state this more humbly then I believe we have an excellent paper (there is no need for this really … ). \n2. Despite making criticisms towards a specific instance of IPF (which is only nicely satisfied / justified when Y,Y_hat are at equilibrium rather than approximately parameterized) the final form of the proposed algorithm is very IPF-esq, and whilst the differences are outlined as they are developed it feels they are not clearly emphasized enough. 
And as mentioned in the previous point, it's not necessarily proved that the alternating objective provided cannot be re-written or re-interpreted as alternating path KLs with constraints. There is a gap here in the comparison or relationship between the schemes, and the small paragraph claiming IPF is not applicable falls very short. \n3. Line 140: the notation is not particularly nice. I would advise subscripting Y and Y_hat with time indices as done in [27] and in the Appendix.\n4. App A.2 Eq 26: whilst there are prior results which have not been acknowledged (e.g. [2] Equation 16 and Theorem 5, or [3] Theorem 3) trading off scores with divergences in the context of diffusions, it is not clear how the cross terms involving the score q vanish after applying Green's identity; it's clear how divergences arise on Z_theta, but terms involving q and Z_theta/phi typically would require further identities/steps (i.e. using the FPK equation) to substitute div(f); this is missing a lot of clarity and steps. Whilst I know this result to be true I find the sketch incomplete and very uninformative. \n5. FBSDE theory asserts conditions under which some particular system of Ito processes satisfies certain HJB equations. Typically reaching said conditions numerically (see [1]) is achieved by minimizing some MSE-styled error whose fixed point trivially matches the conditions in Theorem 1; the authors fail to detail how they go from Theorem 1 to the objectives L(theta) and L(phi).\n6. Overall this paper is lacking an additional proposition which proves that a combined objective function L(\\phi, \\theta) is minimized when the conditions of Theorem 1 are satisfied; whilst this is informally discussed and partially derived/motivated, it is not formally stated or argued anywhere. Then after this, a further lemma or remark asserting how a coordinate ascent scheme would lead from this objective to a fixed point satisfying the conditions in Theorem 1 would substantially improve the presentation and results. Going through the appendix it does seem that this can be achieved without too much additional new work but with substantial change in the write-up. It is not the job of the reviewer/reader to infer (and basically have to prove) this from partial information and semi-connected statements. \n7. To maybe emphasize further the previous point: there are no convergence guarantees proved for Algorithm 1; the way it's sketched / presented comes across as mostly heuristic, with some components derived but without any overall guarantee presented.\n\n\n\n[1] Nüsken, N. and Richter, L., 2021. Solving high-dimensional Hamilton–Jacobi–Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space. Partial Differential Equations and Applications, 2(4), pp.1-48.\n\n[2] Huang, C.W., Lim, J.H. and Courville, A.C., 2021. A variational perspective on diffusion-based generative models and score matching. Advances in Neural Information Processing Systems, 34, pp.22863-22876.\n\n[3] Song, Y., Durkan, C., Murray, I. and Ermon, S., 2021. Maximum likelihood training of score-based diffusion models. Advances in Neural Information Processing Systems, 34, pp.1415-1428.\n\n[27] Chen, T., Liu, G.H. and Theodorou, E.A., 2021. Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory. arXiv preprint arXiv:2110.11291.\n As detailed in the suggestions / questions, the paper contains incomplete proofs and disconnected statements. 
Overall the text is not well connected and the reviewer feels it is not well written for a machine learning audience, even one specialising in the relevant subfields covered in this work. Thus, due to the lack of clarity and verifiability, I cannot recommend this work for publication in its current form; it feels that the authors expect readers to complete and guess many intermediate and missing steps in proofs, to the point where in some cases the overall desired statements you would expect for a new numerical scheme are not actually proved or even stated. Overall, the way in which the theory is presented does not prove/outline overall convergence guarantees for the final numerical scheme; the paper requires significant re-writing of its overall presentation in order to be clear/accessible. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"gBTXQUzSeS1",
"gIJudO1oveOM",
"7xqRbkq4hE",
"pq9PwlJYopu",
"uhEmeAozQbH",
"ioWyiu1yePz",
"iv6LpODV-h",
"236QopZ6zHm",
"236QopZ6zHm",
"QqluvKeGJfDt",
"xNPEYkus7s5",
"5IS0Q1Td2qi",
"hWkBAkB7eew",
"3iHrvuSBbV7y",
"fdCEu3U3JMQ",
"IvGjllPnU4",
"xPs93ck2PKY",
"nips_2022_fp33Nsh0O5",
"nips_2022_fp33Nsh0O5",
"nips_2022_fp33Nsh0O5",
"nips_2022_fp33Nsh0O5",
"nips_2022_fp33Nsh0O5"
] |
nips_2022_espX_4CLr46 | Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources | Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems. | Accept | This paper presents a method for blind separation of correlated sources, which is a challenging task. Applying the weight similarity matching approach to the Det-Max optimization, the authors develop a biologically-plausible two-layered neural network that can separate correlated sources from their linear mixture. All of reviewers agree that the paper is well written and has a solid contribution in BSS. The approach in this paper is a general framework that applies to various source domains. A downside is in experiments on real-world data, which has been improved during the author rebuttal period. Therefore, I am pleased to suggest the paper to be accepted.
| train | [
"wYmH7uFjksk",
"7V8_f7Sq2mQ",
"zFYJ_t85Pl",
"yD_v2X7B8RW",
"Rkt2LG-MDY",
"61N0b2AT2X",
"RCE7SR3dbr8",
"-QyJls9z8bx",
"TNrUMwo_pfZ",
"NdNNupLuiWD",
"x-a9saNKa8",
"NCT8otgP_OZ",
"JU57hLkDIAO",
"yEnR1YJEAYV",
"eIxESI6cv5t",
"rE-G4rASliQ",
"S6kPhx3_UB",
"YDjWuJaG8vz",
"6Nq9meKaBg",
"sJ6r3hLnie9",
"8azOK4QpBcA",
"j-Wrs7f8E6"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank you for your useful feedback and suggestions.",
" We would like to thank you for your useful feedback and suggestions.",
" We would like to thank you for your useful feedback and suggestions.",
" We would like to thank you for the useful feedback and suggestions. In the final version of our article, we will extend hyperparameter sensitivity analysis in the appendix section. As one of the near future extensions of our framework, we will consider applications involving the use of real data.",
" The authors have adequately answered my comments, I have raised my rating accordingly.",
" My comment has been addressed here. I upgraded my grade.",
" Thanks for your answers. \n1. Hyper-parameter selection. \nThanks for this new experiment. It shows the sensitivity of the experiment to the parameter $\\lambda_{SM}$ and shows that the Gain initialization seems to have a limited impact. However, a lot of hyper-parameters are still fixed and for instance the rule where the Euclidean norm of all rows of W is normalized to 0.0033 seems rather arbitrary. My take in general is that \"trial and errors\" with many hyper-parameters is not desirable because it makes the method much longer to train if a proper cross validation is performed and otherwise there is a risk of over-fitting the test set.\n2. Experiments on real data\nI strongly disagree that mixing images with random mixing matrices with entries sampled from a Gaussian distribution is real data. In this experiment, your model perfectly holds so you don't test robustness with respect to model mis-specification. For instance, in real applications, you might have additional noise that has nothing to do with the sources you are trying to recover or mixing matrices could be generated from any distribution.\nThe same thing holds for your additional experiments where again, the mixing is fully artificial. In a real data experiment, you do not have access to how the data are mixed (think of a recording of several voices in a room, there can be reverberations, time delays and so many other disturbance that makes the model hold only partially).",
" I want to thank the authors for addressing all my comments. I think that the added results are very interesting and highlight the online nature of the algorithm. I think they are a very valuable addition to the paper. Also appreciate that the efforts made an effort to improve the legibility of the paper more by stating the contributions more clearly. Given the author's response and the other reviews, I see no reason to change my ratings. ",
" I have satisfied with the response and changes made by the authors. I keep my positive review. ",
" We thank you for your time and your comments. Below we present our response to your comments and questions.\n\n>Disclaimer by Reviewer: Due to my lack of expertise on the topic, I am not able to assess the soundness of the mathematical model nor the correctness of the theorem proof. I have also not read carefully the very long appendix (31 pages, 17 figures). Due to the length of the full manuscript, a journal might be a better publishing venue to benefit from more extensive reviews.\n\nWe appreciate your recommendation. We respectfully think that our article is a good fit for Neurips, which has published many articles in recent years on the topic of biologically-plausible neural networks [i,ii,iii], and many papers with similar or longer lengths. \n\n[i]. Bahroun Y, Chklovskii D, Sengupta A. A Normative and Biologically Plausible Algorithm for Independent Component Analysis. Advances in Neural Information Processing Systems. 2021 Dec 6;34:7368-84.\n\n[ii]. Pogodin R, Mehta Y, Lillicrap T, Latham PE. Towards biologically plausible convolutional networks. Advances in Neural Information Processing Systems. 2021 Dec 6;34:13924-36.\n\n[iii]. Tyulmankov D, Fang C, Vadaparty A, Yang GR. Biological learning in key-value memory networks. Advances in Neural Information Processing Systems. 2021 Dec 6;34:22247-58.\n\n>Clarity: The paper is clearly written, but the relation to prior work is somewhat limited. A number of existing methods are listed, but the difference with the proposed method is not clearly explained. The paper would benefit from being more pedagogical about its different original contributions.\n\nWe thank you for the positive feedback on the presentation of our article. Section 1.1 (Other related work) of our paper provides background on relevant existing methods and describes contributions of our article in relation to them. Following your suggestion, in the revised article, before Section 1.1, we added a list of the main contributions of the article:\n\n \"In summary, our main contributions in this article are the following:\n\n* We propose a normative framework for generating biologically plausible neural networks that are capable of separating correlated sources from their mixtures by deriving them from a Det-Max objective function subject to source domain constraints.\n* Our framework can handle infinitely many source types by exploiting their source domain topology.\n* We demonstrate the performance of our networks in simulations with synthetic and realistic data.\n\"\n\n\nTo provide more clarification,\n\nMain novelty : Our article proposes a general framework for constructing biologically plausible neural networks that are capable of separating correlated sources based on information about their source domains. This framework utilizes the Det-Max criterion for correlated source separation capability and two-layer weighted similarity matching to construct biologically plausible neural networks capable of implementing arbitrary linear transformations.\n\nComparison with existing approaches:\n\n* There are BSS frameworks that are capable of separating correlated sources using the Det-Max criterion and source domain information such as SSMF, PMF and LD-InfoMax. However, the algorithms for these frameworks are not online, but batch algorithms. 
In fact, the proposed framework enables biologically plausible neural network-based (online) implementation of these frameworks.\n* There are biologically plausible neural network-based BSS solvers derived using the similarity matching criterion as Nonnegative Similarity Matching (NSM) and Bounded Similarity Matching (BSM). However, these approaches cannot handle correlated sources. BSM employs a single-layer WSM, which is unable to implement arbitrary linear transformations. Furthermore, they are specific to only two source domains: nonnegative orthant for NSM and hypercube ($\\ell_\\infty$-norm ball) for BSM. The proposed framework is capable of handling infinitely many source domains including the unit simplex and infinitely many polytopes corresponding to different source domain characteristics. ",
" >Quality: The numerical experiments to demonstrate the effectiveness of the model are quite limited. It would have been interesting to compare the proposed method with more methods.\n\n>Question 2: Figure 3/4 only compare the proposed method with ICA and NSM, despite a larger number of related works listed in section 1.1: NMF, SSMF, SCA, BCA, PMF, BSM. Why not comparing with these other methods? Did you try any experiments with real data ?\n\nFollowing your suggestion (and the suggestions of other reviewers), in the revised version of our manuscript, we included comparisons with Polytopic Matrix Factorization (PMF) [25] and Log-Det Mutual Information Maximization (LD-InfoMax) [58] for nonnegative antisparse source separation (Section 6.1), antisparse source separation (Section E.2), image separation (Section E.3), and sparse source separation (Section E.4) experiments. Both PMF and LD-InfoMax use Det-Max criterion in Problem (3) to solve blind source separation problem for potentially correlated sources.\nThese frameworks consider the off-line version of the separation scenario we discussed in Sections 2.2 and 2.3, and propose batch algorithms for its solution.\n\n\nPolytopic Matrix Factorization is based on the following optimization problem:\n$$\\text{maximize } \\det(\\boldsymbol{Y}(t) \\boldsymbol{Y}(t)^T) \\text{ subject to } \\boldsymbol{X}(t) = \\boldsymbol{H Y}(t), \\text{ and } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R1),$$\nwhere $\\boldsymbol{H}$ corresponds to the mixing matrix, $\\boldsymbol{Y}(t)$ corresponds to the source estimates.\n\n\nLD-InfoMax [58] can be considered as a statistical interpretation of PMF approach, and it has the following optimization setting:\n$$\\text{maximize } \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} + \\epsilon \\boldsymbol{I} ) - \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} - \\boldsymbol{\\hat{R}\\_{yx}}(\\epsilon \\boldsymbol{I} + \\boldsymbol{\\hat{R}\\_{x}})^{-1} \\boldsymbol{\\hat{R}\\_{yx}}^T+ \\epsilon \\boldsymbol{I} ) \\text{ subject to } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R2),$$\nwhere $\\boldsymbol{\\hat{R}\\_{y}}$ and $\\boldsymbol{\\hat{R}\\_{yx}}$ correspond to the sample covariance matrix of the source estimates $\\boldsymbol{Y}(t)$ and the sample cross-covariance matrix between source estimates and mixtures, respectively.\n\nWe note that both the PMF algorithm in [25] and the LD-InfoMax algorithm in [58] are batch / off-line algorithms. In other words, at each iteration, these algorithms have access to all input data. Due to their batch nature, these algorithms typically achieved better performance results than our neural networks with online-restriction, as expected. We included the results of these new experiments in [Figure 3](https://figshare.com/s/07ebb57d19b77008e12c) (nonnegative antisparse case), [Figure 13](https://figshare.com/s/489eeb186c3d302630b1) (antisparse case) and Table 2 (sparse case). Moreover, we added Section E.1 in our revised manuscript for a brief discussion of these batch algorithms.\n\nRegarding the use of real data: In our article, we performed two experiments with natural images. The experiment in Section 6.2 uses real natural images that are mixed using a random mixing matrix. These images constitute nice examples of naturally correlated sources. Furthermore, the sparse dictionary learning example in Appendix E.6 is also based on the patches obtained from real images. 
Furthermore, this example forms a nice example for the computational modeling of the primary visual cortex through the proposed biologically plausible neural network based on the $\\ell_1$-norm-ball polytope. Also, as an additional example, we provided a digital communication scenario in Section E.8 in our revised manuscript. Sources in digital communication are members of discrete sets, referred to as constellations, making the use of $\\ell_\\infty$-norm-ball polytope a perfect modeling assumption. Furthermore, random mixing matrices with identically distributed independent normal entries are common, and in many cases accurate models of the actual wireless propagation environment [i]. Therefore, simulations involving a wireless digital communication scenario using discrete sources and random matrix mixing would be highly realistic. We presented the simulation results and demonstrate that the proposed neural network successfully handles multi-user separation task at a\nreceiver. \n\n[i] Jakes WC, Cox DC, editors. Microwave mobile communications. Wiley-IEEE press; 1994 Sep 1.",
" \n>Question 1: What makes the proposed method well suited from correlated sources?\n\nCorrelated source separation capability is enabled by two factors: Using information about special source domain (such as unit simplex or identifiable polytope membership) and employing Det-Max criterion which assumes and exploits spreading of source samples inside the presumed domain. The use of these two factors is sufficient for source separation, eliminating the need for statistical assumptions such as the independence or uncorrelatedness of sources (see the end of Section 2.1, Section 3 and Appendix A for details).\n\n>Limitations: The paper stresses the computational cost of the proposed method, but does not give concrete examples. What is the computational cost of the proposed method (including hyperparameter tuning) in the two proposed numerical experiments? What is the computational cost of ICA and NSM?\n\nThank you for incentivizing us to clarify our comment on the computational complexity. To address this issue, we included a section in the supplementary (Appendix F) on the computational complexity of the proposed Det-Max WSM neural networks. In this section, we derive the dimension (input, output) dependence for the per sample computational complexity of simulating these neural networks, which is given by $\\mathcal{O}(\\tau_{\\text{max}} mn)$. Here, $n$ is the number of sources, $m$ is the number of mixtures and $\\tau_{\\text{max}}$ is the number of iterations required to obtain each output vector. We make comparisons with NSM and BSM, which are also biologically plausible neural networks for the BSS problem. We find that, because of their similar recurrent dynamics, their computational complexity are nearly the same except for constant scalings. To summarize, the recurrent neural networks have very similar dimension dependence. However, our proposed networks have two coupled layers which results in the increased scaling factor.\n\nWe note that the aforementioned complexity issue concerns only the digital computer based simulation of these neural networks. The major source of the complexity is on obtaining iterative numerical solutions of the differential equations in (9)-(12). In the analog/neuromorphic implementations, these differential equations are naturally solved by the physical circuit, without any computational load.\n\n\n\n",
" We thank you for your time and for the general positive assessment of our article. We address your comments and questions in the revised article and in our response below. \n\n>Weaknesses 1: One is that the paper claims that the resulting algorithm is only, but only presents these results in the appendix. I would have appreciated that some space of the main paper be allocated to that as it is rather central to the paper.\n\n>Question: Can you propose a concise presentation of the online result showing that your model can operate in that setting?\n\nThank you for this comment. We would like to clarify the coverage of the online aspect of our framework in the main part of the article:\n\n* We provide the generic online optimization setting in Section 5.1 of the article.\n* Only the gradient expressions corresponding to the objective function are provided in the Appendix.\n* We illustrate the derivation of the online algorithm for a special source case, i.e., antisparse sources represented by the $\\ell_\\infty$-norm-ball domain, in Section 5.2.\n* Online algorithm is defined as a gradient search, which is implemented in the form of the output dynamics in (9)-(12) and learning updates in (13)-(15). All of these equations correspond to the gradient search for the online optimization setting in Section 5.1. \n* All these components of the online algorithm define our neural network structure, which is illustrated in Figure 2.1.\n\n In summary, we provide the derivation of a neural network determined by the gradient search-based online algorithm for the special source domain of anti-sparse sources in the main part of our article. Other example source domains are provided in the supplementary part.",
" >Weaknesses 2: The second one is related to the context of the problem. The paper is at the interface of signal processing, machine learning, and neuroscience, and it is a bit much to ask the reader to be well versed in all the different BSS problems covered in the paper. A bit more context for each of the problems would help understand the importance of each of these problems and why building such biologically plausible would be useful. Are natural data mixed or present in the form presented in this paper?\n\nWe thank the reviewer for this question. The main motivation for this work comes from the fact that blind source separation may be implemented throughout the brain. For example, seminal work argued that Gabor receptive\n fields of V1 neurons may be the result of performing BSS on natural images [1, 2], and receptive fields in the auditory system may be the result of performing BSS on natural sounds [4]. In addition, there may be general circuit motifs in the brain for solving BSS. For example, in a seminal experiment, auditory cortex neurons acquired V1-like receptive\nfields when visual inputs were redirected there in a ferret [14], suggesting that auditory and visual cortices may be implementing similar learning algorithms. Many previous works cited in our paper were motivated by these points and developed biologically plausible BSS algorithms.\n\nWe modified to the first paragraph of our introduction to address these points:\n\"Our brains constantly and effortlessly extract latent causes, or sources, of complex visual, auditory or olfactory stimuli sensed by sensory organs [1-11]. This extraction is mostly done without any instruction, in an unsupervised manner, making the process an instance of the blind source separation (BSS) problem [12-13]. Indeed, visual and auditory cortical receptive fields were argued to be the result of performing BSS on natural images [1-2] and sounds [4]. The wide-spread use of BSS in the brain suggests the existence of generic circuit motifs that perform this task [14]. Consequently, the literature on biologically-plausible neural network algorithms for BSS is growing [15-19].\"\n\nThe main contribution of this paper is the consideration of correlated sources, which have not been addressed previously. To solve these correlated source separation problems, we make geometric assumptions about the sources. These assumptions naturally map to biological sources. For example, boundedness is a reasonable assumption for natural sources. Nonnegativity is also natural; in an olfactory mixture, odorants are either there or not. Sparseness is a feature of wavelet-like representations of natural scenes [2]. Antisparseness is motivated by dense or democratic representations [i],[ii], which might be suitable for some internal or sensory representations.\n\nWe modified the sentence in line 87 to \"The use of $\\ell_1$-norm as a convex (non)sparsity measure has been quite successful with various applications including sparse dictionary learning/component analysis [29, 38, 40, 48, 49] and modeling of V1 receptive fields [2].\" and the sentence in line 97 to \"Nonnegativity of sources naturally arises in biological context, for example in demixing olfactory mixtures [53].\"\n\n[i] Studer C, Goldstein T, Yin W, Baraniuk RG. Democratic representations. arXiv preprint arXiv:1401.3420. 2014 Jan 15.\n\n[ii] Studer C, Yin W, Baraniuk RG. Signal representations with minimum ℓ∞-norm. 
In2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton) 2012 Oct 1 (pp. 1270-1277). IEEE.\n\nAnother motivation for our work is to link network structure with function. This is a long standing goal of neuroscience, however examples where this link can be achieved are limited. Our work provides concrete examples where clear links between a network's architecture--i.e. number of interneurons, connections between interneurons and output neurons, nonlinearities (frequency-current curves)-- and its function, the type of source separation or feature extraction problem the networks solves, can be established. These links may provide insights and interpretations that might generalize to real biological circuits.\n\nUnfortunately, space limitations do not allow us to further expand our discussion. If we get an extra page in the final submission, we will add more on this topic. ",
" \n> Weaknesses 3: Finally, the work mainly compares to nonnegative similarity matching and infomax, but it could be interesting to see how the model compares to existing algorithms designed to solve the problem at hand, not only biologically inspired ones.\n\n> Question 3: Can you compare your model to existing algorithms that were designed to solve it, and not only bio-inspired ones?\n\nFollowing your suggestion (and the suggestions of other reviewers), in the revised version of our manuscript, we included comparisons with Polytopic Matrix Factorization (PMF) [25] and Log-Det Mutual Information Maximization (LD-InfoMax) [58] for nonnegative antisparse source separation (Section 6.1), antisparse source separation (Section E.2), image separation (Section E.3), and sparse source separation (Section E.4) experiments. Both PMF and LD-InfoMax use Det-Max criterion in Problem (3) to solve blind source separation problem for potentially correlated sources.\nThese frameworks consider the off-line version of the separation scenario we discussed in Sections 2.2 and 2.3, and propose batch algorithms for its solution.\n\n\nPolytopic Matrix Factorization is based on the following optimization problem:\n$$\\text{maximize } \\det(\\boldsymbol{Y}(t) \\boldsymbol{Y}(t)^T) \\text{ subject to } \\boldsymbol{X}(t) = \\boldsymbol{H Y}(t), \\text{ and } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R1),$$\nwhere $\\boldsymbol{H}$ corresponds to the mixing matrix, $\\boldsymbol{Y}(t)$ corresponds to the source estimates.\n\n\nLD-InfoMax [58] can be considered as a statistical interpretation of PMF approach, and it has the following optimization setting:\n$$\\text{maximize } \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} + \\epsilon \\boldsymbol{I} ) - \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} - \\boldsymbol{\\hat{R}\\_{yx}}(\\epsilon \\boldsymbol{I} + \\boldsymbol{\\hat{R}\\_{x}})^{-1} \\boldsymbol{\\hat{R}\\_{yx}}^T+ \\epsilon \\boldsymbol{I} ) \\text{ subject to } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R2),$$\nwhere $\\boldsymbol{\\hat{R}\\_{y}}$ and $\\boldsymbol{\\hat{R}\\_{yx}}$ correspond to the sample covariance matrix of the source estimates $\\boldsymbol{Y}(t)$ and the sample cross-covariance matrix between source estimates and mixtures, respectively.\n\nWe note that both the PMF algorithm in [25] and the LD-InfoMax algorithm in [58] are batch / off-line algorithms. In other words, at each iteration, these algorithms have access to all input data. Due to their batch nature, these algorithms typically achieved better performance results than our neural networks with online-restriction, as expected. We included the results of these new experiments in [Figure 3](https://figshare.com/s/07ebb57d19b77008e12c) (nonnegative antisparse case), [Figure 13](https://figshare.com/s/489eeb186c3d302630b1) (antisparse case) and Table 2 (sparse case). Moreover, we added Section E.1 in our revised manuscript for a brief discussion of these batch algorithms.",
" We thank you for your useful feedback and positive reviews. We respond to each of your questions and comments below.\n\n>Weaknesses 1: Many hyper-parameters to set and no clear rules on how to set them (but the paper is transparent about this which is a good point).\n\n>Question 1: Hyper-parameters setting: There are many hyper-parameters to set in this method. Did you perform a sensitivity analysis to see in which range they work ? Is there a way to find default values that would work for all kind of problems ? Can you explain how you chose the values in section E.2 and E.3 ?\n\nThank you for this comment. We agree that our Det-Max WSM framework uses several hyperparameters. Based on your comments, we include a new Appendix E.9 in the revised article to study the impact of certain hyperparameters that we observed to be relatively sensitive. \n\nThe Det-Max WSM networks have fairly reasonable performance around their nominal values that we chose for our experiments. To determine these nominal values, we first determined the initial values of hyper-parameters with trial and error. Then we performed greedy search by optimizing the parameters via changing one at a time. The best choice of hyper-parameters we obtained for our initial source domain also constituted as a good starting point for greedy hyper-parameter optimization in other source domains.\n\nIn section Appendix E.9 in our revised manuscript, we provide two brief ablation studies for the selection of $\\lambda_{\\text{SM}}$ and initialization of $\\boldsymbol{D}\\_1$. [Figure 23a](https://figshare.com/s/e666f43ca5631b7eb280) illustrates that the performance of our approach is relatively sensitive to the selection of $\\lambda_{\\text{SM}}$, and we obtain a near-optimal value $\\lambda_{\\text{SM}} = 1 - 10^{-4}$ as a result of this parameter search experiment. [Figure 23b](https://figshare.com/s/cba59736bcb6f05f04d7) demonstrates that the our network performance is less sensitive to the initialization of $\\boldsymbol{D}\\_1$ compared to $\\lambda_{\\text{SM}}$ since the algorithm relatively maintains its performance for different choice of gain variables.\n\n> Weaknesses 2: Experiments are on synthetic data and a toy dataset that does not really correspond to any realistic problem.\n\n>Question 2: Experiments on synthetic data: While the experiments on synthetic data show what they are meant to show, it would have been nice to see some experiments with real systems. ICA is used in many different settings: astronomy, neuroscience, finance (see the section 7 of Hyvärinen, Aapo, and Erkki Oja. \"Independent component analysis: algorithms and applications.\" Neural networks 13.4-5 (2000): 411-430.). Did you try any experiments with real data ?\n\nThanks for your suggestion. In our article, we performed two experiments with natural images. The experiment in Section 6.2 uses real natural images that are mixed using a random mixing matrix. These images constitute examples of naturally correlated sources. Furthermore, the sparse dictionary learning example in Appendix E.6 is also based on the patches obtained from real images. Furthermore, this example forms an example for the computational modeling of the primary visual cortex through the proposed biologically plausible neural network based on the $\\ell_1$-norm-ball polytope.\n\nFollowing your recommendation, we added an additional example from digital communication systems. 
Sources in digital communication are members of discrete sets, referred to as constellations, making the use of $\\ell_\\infty$-norm-ball polytope a perfect modeling assumption. Furthermore, random mixing matrices with identically distributed independent normal entries are common, and in many cases accurate models of the actual wireless propagation environment [i]. Therefore, simulations involving a wireless digital communication scenario using discrete sources and random matrix mixing would be highly realistic. In Appendix E.8. of the revised article, we provide such a simulation and demonstrate that the proposed neural network successfully handles multi-user separation task at a receiver. Note that the use of neural networks with local learning rules is still relevant for such tasks, as they enable low-power/low-complexity implementations in future neuromorphic systems with local learning constraints.\n\n[i] Jakes WC, Cox DC, editors. Microwave mobile communications. Wiley-IEEE press; 1994 Sep 1.",
" >Weaknesses 3: Missing comparison with Det-Max criterion.\n\n> Question 3: Missing comparison with Det-Max criterion According to your theorem 1: Problem (3) with Det-Max criterion and Problem (4) that yield the equations for WSM updates solve essentially the same problem. Therefore it would have seem natural to check whether they yield the same results in the synthetic experiments. Did you perform this comparison ? The Det-Max criterion is mathematically much simpler and seem easier to optimize. Why should practitioners use WSM instead ?\n\nFollowing your suggestion (and the suggestions of other reviewers), in the revised version of our manuscript, we included comparisons with Polytopic Matrix Factorization (PMF) [25] and Log-Det Mutual Information Maximization (LD-InfoMax) [58] for nonnegative antisparse source separation (Section 6.1), antisparse source separation (Section E.2), image separation (Section E.3), and sparse source separation (Section E.4) experiments. Both PMF and LD-InfoMax use Det-Max criterion in Problem (3) to solve blind source separation problem for potentially correlated sources.\nThese frameworks consider the off-line version of the separation scenario we discussed in Sections 2.2 and 2.3, and propose batch algorithms for its solution.\n\n\nPolytopic Matrix Factorization is based on the following optimization problem:\n$$\\text{maximize } \\det(\\boldsymbol{Y}(t) \\boldsymbol{Y}(t)^T) \\text{ subject to } \\boldsymbol{X}(t) = \\boldsymbol{H Y}(t), \\text{ and } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R1),$$\nwhere $\\boldsymbol{H}$ corresponds to the mixing matrix, $\\boldsymbol{Y}(t)$ corresponds to the source estimates.\n\n\nLD-InfoMax [58] can be considered as a statistical interpretation of PMF approach, and it has the following optimization setting:\n$$\\text{maximize } \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} + \\epsilon \\boldsymbol{I} ) - \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} - \\boldsymbol{\\hat{R}\\_{yx}}(\\epsilon \\boldsymbol{I} + \\boldsymbol{\\hat{R}\\_{x}})^{-1} \\boldsymbol{\\hat{R}\\_{yx}}^T+ \\epsilon \\boldsymbol{I} ) \\text{ subject to } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R2),$$\nwhere $\\boldsymbol{\\hat{R}\\_{y}}$ and $\\boldsymbol{\\hat{R}\\_{yx}}$ correspond to the sample covariance matrix of the source estimates $\\boldsymbol{Y}(t)$ and the sample cross-covariance matrix between source estimates and mixtures, respectively.\n\nWe note that both the PMF algorithm in [25] and the LD-InfoMax algorithm in [58] are batch / off-line algorithms. In other words, at each iteration, these algorithms have access to all input data. Due to their batch nature, these algorithms typically achieved better performance results than our neural networks with online-restriction, as expected. We included the results of these new experiments in [Figure 3](https://figshare.com/s/07ebb57d19b77008e12c) (nonnegative antisparse case), [Figure 13](https://figshare.com/s/489eeb186c3d302630b1) (antisparse case) and Table 2 (sparse case). Moreover, we added Section E.1 in our revised manuscript for a brief discussion of these batch algorithms.\n\nNote that the reason for the introduction of the WSM setting in (4) in Section 4, instead of direct use of the PMF or LD-InfoMax batch algorithms to solve the Det-Max problem in (3), is to be able to produce online optimization formulation in (5) and (6), which leads to biologically plausible neural networks with local update rules. 
Furthermore, we note that, although the implicit definition of the algorithm's output (see Algorithm 1 in Section E) for such biologically plausible neural networks makes the implementation less practical in digital hardware, they enable efficient low-power implementations in future analog neuromorphic systems with local learning rule constraints.",
" We thank you for your time and for your detailed review. We appreciate your rating our framework and results as solid. In the revised article and the comments below, we address your comments and concerns.\n\n> Novelty : The paper offers a general framework for deriving neural-network from the Det-Max approach, which is a novel contribution. They are the first (to the best of my knowledge) to propose a biologically-plausible algorithm that separates correlated sources. \n\nWe thank you for the positive feedback.\n\n> Experimental Evaluation: The only experiment that I am missing is one that compares WSM with algorithms that are able to separate correlated sources and reporting how the performance compares there. I think this could be a necessary addition to judge rating of the paper.\n\n> Questions: Please add more experiments and report SNIR comparing against algorithms that are able to separate correlated sources.\n\nFollowing your suggestion (and the suggestions of other reviewers), in the revised version of our manuscript, we included comparisons with Polytopic Matrix Factorization (PMF) [25] and Log-Det Mutual Information Maximization (LD-InfoMax) [58] for nonnegative antisparse source separation (Section 6.1), antisparse source separation (Section E.2), image separation (Section E.3), and sparse source separation (Section E.4) experiments. Both PMF and LD-InfoMax use Det-Max criterion in Problem (3) to solve blind source separation problem for potentially correlated sources.\nThese frameworks consider the off-line version of the separation scenario we discussed in Sections 2.2 and 2.3, and propose batch algorithms for its solution.\n\n\nPolytopic Matrix Factorization is based on the following optimization problem:\n$$\\text{maximize } \\det(\\boldsymbol{Y}(t) \\boldsymbol{Y}(t)^T) \\text{ subject to } \\boldsymbol{X}(t) = \\boldsymbol{H Y}(t), \\text{ and } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R1),$$\nwhere $\\boldsymbol{H}$ corresponds to the mixing matrix, $\\boldsymbol{Y}(t)$ corresponds to the source estimates.\n\n\nLD-InfoMax [58] can be considered as a statistical interpretation of PMF approach, and it has the following optimization setting:\n$$\\text{maximize } \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} + \\epsilon \\boldsymbol{I} ) - \\frac{1}{2}\\log\\det(\\boldsymbol{\\hat{R}\\_y} - \\boldsymbol{\\hat{R}\\_{yx}}(\\epsilon \\boldsymbol{I} + \\boldsymbol{\\hat{R}\\_{x}})^{-1} \\boldsymbol{\\hat{R}\\_{yx}}^T+ \\epsilon \\boldsymbol{I} ) \\text{ subject to } \\boldsymbol{Y}(t)_{:,j} \\in \\mathcal{P} \\ \\ \\forall j \\ \\ (R2),$$\nwhere $\\boldsymbol{\\hat{R}\\_{y}}$ and $\\boldsymbol{\\hat{R}\\_{yx}}$ correspond to the sample covariance matrix of the source estimates $\\boldsymbol{Y}(t)$ and the sample cross-covariance matrix between source estimates and mixtures, respectively.\n\nWe note that both the PMF algorithm in [25] and the LD-InfoMax algorithm in [58] are batch / off-line algorithms. In other words, at each iteration, these algorithms have access to all input data. Due to their batch nature, these algorithms typically achieved better performance results than our neural networks with online-restriction, as expected. We included the results of these new experiments in [Figure 3](https://figshare.com/s/07ebb57d19b77008e12c) (nonnegative antisparse case), [Figure 13](https://figshare.com/s/489eeb186c3d302630b1) (antisparse case) and Table 2 (sparse case). 
Moreover, we added Section E.1 in our revised manuscript for a brief discussion of these batch algorithms.\n\n>Contribution Section: The paper is fairly clearly written, but I would suggest to add a section that states more clearly what the contributions of the paper are.\n\nThank you for your positive assessment. The second paragraph of Section 1.1 (Other related work) contains the contributions of our article in relation to the existing literature. However, based on your suggestion, we added a list of main contributions of the article before Section 1.1:\n\n \"In summary, our main contributions in this article are the following:\n\n* We propose a normative framework for generating biologically plausible neural networks that are capable of separating correlated sources from their mixtures by deriving them from a Det-Max objective function subject to source domain constraints.\n* Our framework can handle infinitely many source types by exploiting their source domain topology.\n* We demonstrate the performance of our networks in simulations with synthetic and realistic data.\n\"\n",
" The authors present a biologically-plausible algorithm for blind source separation (BSS) of correlated input sources. By applying weight similarity matching (WSM) approach to the Det-Max optimization algorithm used for BSS, the authors show that they can derive 2-layered Hebbian neural networks that are able to separate correlated sources from linear mixtures. They then compare their methods against Independent-Components Analysis (ICA) and Nonnegative Similarity Matching (NSM) methods in two tasks: one with artificial signals and another with natural images. In the task with artificial signals, the authors show how their algorithms is robust against correlation in the input sources whereas ICA and NSM degrade as measured by SINRs. The authors report results for the task of separating mixtures of natural images that are in-line with the former results, and they cherry-pick an example that illustrated the superior performance of their algorithm. \n\n The paper offers a general framework for deriving neural-network from the Det-Max approach, which is a novel contribution. They are the first (to the best of my knowledge) to propose a biologically-plausible algorithm that separates correlated sources. Overall, seems a worth contribution in terms of novelty. In terms of quality, I think the paper is very solid, and there are no obvious errors that I can see. The only experiment that I am missing is one that compares WSM with algorithms that are able to separate correlated sources and reporting how the performance compares there. I think this could be a necessary addition to judge rating of the paper. The paper is fairly clearly written, but I would suggest to add a section that states more clearly what the contributions of the paper are. I like that the authors highlight the limitations of their approach. In terms of significance, I think the paper is relevant for understanding how brains may accomplish BSS and to investigate novel biologically-inspired algorithmic improvements that can help advance state-of-the-art methods of BSS with correlated sources. Please add more experiments and report SNIR comparing against algorithms that are able to separate correlated sources. Yes.",
" This work focuses on the BSS problem and solves it by imposing some geometrical priors on the sources via an online weighted similarity matching algorithm (WSM). \nSince WSM does not use statistical independence of the sources to recover them, sources can be recovered even if they are correlated.\nWSM is benchmarked on 2 datasets (a synthetic and a toy dataset) and is shown to yield better performance than ICA or non-negative similarity matching (NSM). Strength:\n- A general framework that applies to a large set of priors on the sources\n- An online algorithm with local update rules that is therefore more biologically plausible\n\nWeaknesses:\n- Many hyper-parameters to set and no clear rules on how to set them (but the paper is transparent about this which is a good point)\n- Experiments are on synthetic data and a toy dataset that does not really correspond to any realistic problem\n- Missing comparison with Det-Max criterion\n\nI quickly reviewed the code: it lacks documentation and unit tests are missing but is overall well structured and readable. I would still advice the authors to document every public function, make unit tests, examples and set up a continuous integration so that other researchers can easily build upon their work. - Hyper-parameters setting: \nThere are many hyper-parameters to set in this method. Did you perform a sensitivity analysis to see in which range they work ? Is there a way to find default values that would work for all kind of problems ? Can you explain how you chose the values in section E.2 and E.3 ?\n\n- Experiments on synthetic data:\nWhile the experiments on synthetic data show what they are meant to show, it would have been nice to see some experiments with real systems. ICA is used in many different settings: astronomy, neuroscience, finance (see the section 7 of Hyvärinen, Aapo, and Erkki Oja. \"Independent component analysis: algorithms and applications.\" Neural networks 13.4-5 (2000): 411-430.). \nDid you try any experiments with real data ?\n\n- Missing comparison with Det-Max criterion\nAccording to your theorem 1: Problem (3) with Det-Max criterion and Problem (4) that yield the equations for WSM updates solve essentially the same problem. Therefore it would have seem natural to check whether they yield the same results in the synthetic experiments. Did you perform this comparison ? \nThe Det-Max criterion is mathematically much simpler and seem easier to optimize. Why should practitioners use WSM instead ? Ethical limitations are properly discussed",
" This work follows a recent line of work on formulating blind source separation problems as solutions of similarity matching objective functions. This work greatly expands existing work in the domain by proposing geometric interpretation and an objective function related to the Det-Max approach. Also, the formalism allows for the derivation of a biologically-plausible and online learning algorithm. Indeed, the model can be implemented by a two-layer neural network with local learning rules. The authors have covered a broad class of blind source separation problems. These problems generalize the well-known problems for which there existed or not biologically plausible learning algorithms. There is a broad modeling literature on BSS, and a growing one on similarity matching, and these generalizations and extensions of both is very novel. Although it is very technical the manuscript is well written and is easy fairly easy to follow. \n\nI found three minor weaknesses in this paper that can be easily addressed. \n\nOne is that the paper claims that the resulting algorithm is only, but only presents these results in the appendix. I would have appreciated that some space of the main paper be allocated to that as it is rather central to the paper. \n\nThe second one is related to the context of the problem. The paper is at the interface of signal processing, machine learning, and neuroscience, and it is a bit much to ask the reader to be well versed in all the different BSS problems covered in the paper. A bit more context for each of the problems would help understand the importance of each of these problems and why building such biologically plausible would be useful. Are natural data mixed or present in the form presented in this paper? \n\nFinally, the work mainly compares to nonnegative similarity matching and infomax, but it could be interesting to see how the model compares to existing algorithms designed to solve the problem at hand, not only biologically inspired ones. My questions relate to the minor weaknesses found above: \n\nCan you propose a concise presentation of the online result showing that your model can operate in that setting? \n\nCan you explain why would biological plausible model has to solve such problems? \n\nCan you compare your model to existing algorithms that were designed to solve it, and not only bio-inspired ones? The authors have addressed the limitations of the paper in the last section. ",
" The paper proposes a novel neural network architecture to perform a blind source separation task, specifically addressing the case of correlated sources. The proposed architecture is shallow (2 or 3 layers) and derived in an online manner to maximize biologically plausibility. The proposed architecture is shown to solve a determinant-maximization problem, as proved in Theorem 1 (line 172), and to adequately solve two synthetically correlated source separation toy examples. *Disclaimer: Due to my lack of expertise on the topic, I am not able to assess the soundness of the mathematical model nor the correctness of the theorem proof. I have also not read carefully the very long appendix (31 pages, 17 figures). Due to the length of the full manuscript, a journal might be a better publishing venue to benefit from more extensive reviews.*\n\n### Originality\nThe blind source separation problem tackled by the paper is a well-known problem, addressed by a wide literature of methods. The originality seems to resides in the special case of correlated sources (as opposed to e.g. ICA which specifically assumes independent sources).\n\nThe proposed approach builds upon a recent framework called weighted similarity matching (WSM) and introduced in [15]. The difference with related works is very briefly discussed lines 70-74.\n\n### Clarity\nThe paper is clearly written, but the relation to prior work is somewhat limited. A number of existing methods are listed, but the difference with the proposed method is not clearly explained. The paper would benefit from being more pedagogical about its different original contributions. \n\n### Quality\nThe numerical experiments to demonstrate the effectiveness of the model are quite limited. It would have been interesting to compare the proposed method with more methods. 1. What makes the proposed method well suited from correlated sources?\n\n1. Figure 3/4 only compare the proposed method with ICA and NSM, despite a larger number of related works listed in section 1.1: NMF, SSMF, SCA, BCA, PMF, BSM. Why not comparing with these other methods?\n - The paper stresses the computational cost of the proposed method, but does not give concrete examples. What is the computational cost of the proposed method (including hyperparameter tuning) in the two proposed numerical experiments? What is the computational cost of ICA and NSM?"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
1
] | [
"-QyJls9z8bx",
"Rkt2LG-MDY",
"TNrUMwo_pfZ",
"61N0b2AT2X",
"j-Wrs7f8E6",
"S6kPhx3_UB",
"rE-G4rASliQ",
"YDjWuJaG8vz",
"eIxESI6cv5t",
"j-Wrs7f8E6",
"j-Wrs7f8E6",
"j-Wrs7f8E6",
"8azOK4QpBcA",
"8azOK4QpBcA",
"8azOK4QpBcA",
"sJ6r3hLnie9",
"sJ6r3hLnie9",
"6Nq9meKaBg",
"nips_2022_espX_4CLr46",
"nips_2022_espX_4CLr46",
"nips_2022_espX_4CLr46",
"nips_2022_espX_4CLr46"
] |
nips_2022_QNjyrDBx6tz | Practical Adversarial Multivalid Conformal Prediction | We give a simple, generic conformal prediction method for sequential prediction that achieves target empirical coverage guarantees on adversarial data. It is computationally lightweight --- comparable to split conformal prediction --- but does not require having a held-out validation set, and so all data can be used for training models from which to derive a conformal score. Furthermore, it gives stronger than marginal coverage guarantees in two ways. First, it gives threshold-calibrated prediction sets that have correct empirical coverage even conditional on the threshold used to form the prediction set from the conformal score. Second, the user can specify an arbitrary collection of subsets of the feature space --- possibly intersecting --- and the coverage guarantees will also hold conditional on membership in each of these subsets. We call our algorithm MVP, short for MultiValid Prediction. We give both theory and an extensive set of empirical evaluations. | Accept | This paper proposes a conformal prediction based method for sequential prediction, relaxing the exchangeability assumption. It is robust to distribution shift, and achieves group-conditional coverage guarantees. The method is efficient, novel, and outperforms existing methods. All the reviewers, including myself, find the paper a solid contribution to the methodology and analysis, hence a clear accept. | val | [
"JnRYZu6KNfK",
"oyz1Rmm_Qsv",
"_wcu_bMmSkmd",
"y67dsvG4IEy",
"o8J9it7g9Xpv",
"XNedzv0CP3",
"EshtFYE56e",
"0R_FGzdN52p",
"sY2oxEv1w_Y",
"NAmLlkFpdJd",
"Wfr1IhR1vSG",
"ISHdf4AHB_"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you --- and thanks once again for the time you spent reviewing!",
" Thank you for the clarifications! My concerns are resolved. I recommend acceptance.",
" I appreciate the detailed rebuttal and clarifications.",
" Thank you!",
" Thank you for the clarifications. While most of my concerns are resolved, it will be valuable to include experiments that demonstrate the effect of $\\eta$ as well as robustness to violations of Definition 3.1. I recommend acceptance.",
" Thank you for your careful review! Your feedback about the exposition resonates, and we will take your comments seriously as we revise the paper to improve clarity. \n\n**Choosing $\\eta$:** In all of our experiments, we set $\\eta$ as specified by Theorem 3.1, and so we do not treat it as a hyper-parameter in our work. It is true that it might be possible to further improve the experimental results by doing hyperparameter optimization to set $\\eta$, but we do not explore this since the algorithm works well with the theoretically motivated setting for $\\eta$, and this simplifies the algorithm by eliminating a hyperparameter. \n\n**Checking Definition 3.1:** It is true that we cannot verify from samples that a sequence of non-conformity scores is generated from a process that satisfies Definition 3.1. But, we can verify that MVP's coverage guarantees on a stream of data --- in all of our experiments, it obtains its target coverage guarantees without the need to separately perturb the nonconformity scores. Note that every continuous distribution with bounded density is $(\\rho,r\\cdot m)$ smooth for sufficiently large $r$, and our algorithm can be instantiated with an arbitrarily large value of $r$ without any cost in either run time or convergence; thus, we can always enforce the smoothness condition if needed by adding noise that is uniform in the interval $[0,\\epsilon]$ for arbitrarily small $\\epsilon$. If we do this, we continue to guarantee valid coverage of the original non-conformity scores by increasing the threshold at prediction time by the same (arbitrarily small) $\\epsilon$. We will add a discussion to our paper to clarify this point, and note that we never need to do so in any of our experiments. \n\n**Comparisons to Split Conformal Prediction:** In our experiments, we use split conformal methods in an ``online mode'' --- as we make predictions in sequence, we add points that we have seen either to the calibration or training set, and retrain and re-calibrate the models at each step (except for the Imagenet experiment, in which we use a pre-trained model). So while you are right that it would be interesting to add ACI as a comparison, we are giving the split conformal methods the opportunity to make use of data as it arrives, just as we give to MVP.",
" \n**Figure 4a, undercoverage:** ACI undercovers on all the displayed groups other than the entire sequence (on which it achieves exact marginal coverage), but it actually significantly over-covers on the complement of the union of these groups (not shown in the plot), thus compensating for the displayed under-coverage on the groups of interest. We will add a sentence clarifying this point in the revision.\n\n**ImageNet:** Re the calibration set size we use for RAPS: We experimented with larger calibration sets, but they did not affect the results significantly. Since RAPS is a split conformal method, it uses a single threshold and so is threshold calibrated, but because of this cannot offer group conditional guarantees. We did not repeat our group conditional and distribution shift experiments on the ImageNet data, in part because our paper including our experimental evaluation was already very long, but you are right that adding these would make the experimental section more comprehensive, and we will plan to do so. \n\n**Threshold Conditional Coverage:** Yes, in our time series experiments with increasing nonconformity scores, we show that ACI alternates between full coverage and under-coverage, which shows a lack of threshold calibrated coverage (since, e.g., conditional on predicting the trivial prediction set/threshold, the coverage must be 100\\%). \n\nThanks once again for the detailed suggestions in your review --- these will be very helpful as we edit for clarity in the revision!\n",
" \nThank you so much for the effort that you put into your detailed review! We will take your expositional points to heart and incorporate your suggestions to improve clarity. \n\n**Relationship to Gupta et al.:** The most important improvement compared to Gupta et al. is this: Their algorithm for obtaining multivalid coverage involved, at every round, calling the Ellipsoid algorithm with a greedy separation oracle to solve a linear program with infinitely many constraints and hence has no practical implementation (and they do not implement it). We give a very simple combinatorial algorithm for the same problem, give a fast implementation, and run a set of experimental comparisons. We also give various improvements related to the convergence rates, but we appreciate that the discussion of some of these could be moved to the supplement to avoid obscuring the main point.\n\n**Comparison to [a] and [b]:** We note that [a] and [b] both study a very different kind of conditional guarantee --- validity conditional on the training/calibration set. They do not provide the kind of conditional coverage guarantee that we provide, which conditions on both the group membership of a point at test time as well as on the threshold used at test time. (As an aside, in the online setup, there is no distinction between the training and test set; instead, we provide the stronger guarantee of coverage for every realized transcript.) We will add detailed discussions of [a] and [b] to our related work.\n\nMore generally, we note that split conformal prediction (and all prior work we are aware of) is currently unable to provide the kind of tight group-conditional coverage guarantees we have when the groups intersect, even under the assumption that the data is exchangeable. Two prior papers (which we cite) have studied group conditional coverage in the exchangeable setting: Romano et al. (2020) and Barber et al. (2020). Romano et al. consider separately calibrating on each group, but this method only applies to disjoint groups. Barber et al. study intersecting groups and separately calibrate on each one, and then at test time, use the most conservative threshold amongst all groups that a test example $x$ is a member of. This method over-covers --- i.e. it does not converge to the target coverage level even in the limit (and does not offer threshold conditional coverage).\n\nQuestions about the plots/experiments: \n\n**Figure 2, difficulty of groups**: Indeed, in this synthetic data experiment, although we have 20 overlapping groups, by construction, there is (unknown to the algorithm) one low noise group (group 0) and one high noise group (group 1). The other groups intersect with these but are not themselves noise-relevant. Although it is simple, we think this is an interesting experiment because the algorithm must discover which groups are noise relevant. And despite it's simplicity, this setup is already enough to demonstrate that prior methods do not achieve tight group conditional coverage.\nIn Figure 3, we repeat the same experiment on real census derived data, with groups defined by race and gender. As you predict, coverage levels vary in more interesting (although less pronounced) ways across these groups. 
\n\n**Adding noise in time series experiments:** We chose the stock prediction dataset so that we can do a direct comparison to ACI on the dataset they use in their paper.\nWe add noise to make the behavior of the different subgroups (i.e., different days of the week) we consider interesting.\nIn particular, financial data does not exhibit noticeably different statistical properties on any simple groupings of days (or else this would be exploitable). We add noise to different subsets of the days to produce an experiment in which the uncertainty of the model is quantifiably different within different groups. We will add a sentence to clarify why our subgroup experiment is synthetic in the revision. \n\n**Figure 3**: We compare to two variants of split conformal prediction. The first variant ignores the group structure, and hence uses the same threshold on all examples. The second (the method of Barber et al. 2020) calibrates a threshold for each group, and then uses the most conservative threshold amongst all groups an example is a member of --- this one uses different thresholds for different examples. \n",
" Thank you for your review! We take to heart your point that our paper is heavy with notation, and will look for ways to reduce the notational burden while maintaining rigor. Below we answer your questions:\n\n**Strength of our adversary:** The adversary in the online adversarial model is indeed very strong, and the fact that we can get positive results in this model means that our algorithm is very robust. Specifically, the adversary can choose any covariates $x$ and any distribution over labels $y$ at each step; the only constraint being that the induced distributions over non-conformity scores are ``smooth'', which is satisfied (with some parameters) whenever the distribution on non-conformity scores is continuous. Assumptions like this are always necessary when the goal is not just to get conservative coverage, but to converge to the target coverage rate exactly. A common technique in the literature to enforce this assumption if it does not hold in the data naturally is to perturb non-conformity scores with arbitrarily small amounts of continuous noise. \n\n**Comparsion to Tibshirani et al. [2019]:** Our assumption is substantially weaker than the covariate shift setting of Tibshirani et al [2019], which assumes that the data shifts a single time in a known and proscribed way between calibration and test: in particular, that the feature distribution changes via a reweighting using known propensity scores, and that the conditional label distribution remains unchanged. In the online setting we consider, there is no division drawn between calibration and test, and the feature distribution can change in arbitrary ways at every round (that need not be via known propensity scores), as can the conditional label distribution. \n\n**Comparison to [1]**: [1] studies \"adversarially perturbed data\" in the spirit of the \"adversarial examples\" literature. They assume that the data is exchangeable, but that the test data is perturbed by an adversary who is limited to modifying each example's features within a ball of small norm. Their techniques are very different from ours (and much closer to traditional split conformal prediction), and leverage the fact that the underlying distribution (except for the perturbations) is exchangeable, which we do not require. We will add a detailed discussion of [1] to our related work.\n\n**Conditional coverage on classification tasks:** Our ImageNet experiments demonstrate that our techniques can be applied to arbitrary non-conformity scores, including those recently developed for classification settings. On this experiment, we did not repeat our earlier investigations of subgroup and threshold conditional coverage and distribution shift since we investigate these issues extensively in our other experiments (and our paper is already quite long). \n\n\nOur actual algorithm operates directly on the nonconformity scores and is agnostic to the process that generates them (in particular, if the task is classification or regression), so we have no reason to expect the results to be different. Nevertheless, you are right that this would be a natural experiment that would make our collection of results more comprehensive, and we will consider adding it. \n\n**Number of calibration buckets:** In all of our experiments we choose m = 40 calibration buckets. This isn't a principled choice (i.e. we did not do any hyper-parameter optimization on a task by task basis) --- just a simple choice that seemed to work well consistently. 
With more careful hyper-parameter tuning our results would presumably improve slightly. ",
" This paper presents an online conformal prediction algorithm, which is proven to satisfy a “threshold-calibrated multivalid coverage guarantee” (Definition 2.1). Compared with prior work, several advantages of the proposed algorithm are: 1) it does not require the exchangeability assumption for proving the coverage guarantee, thus it can be used to model sequential data such as time series; 2) it is claimed to be robust to arbitrary and unanticipated distribution shift; 3) the obtained coverage guarantee is not just marginally but promises group-conditional coverage; 4) it achieves improved statistical rate over the work of [Gupta et al., 2022]. Experiments on synthetic data, time-series data and image data support these advantages over existing methods.\n Strength:\n\n1. The paper is well-motivated, and its contribution is significant to the field of conformal prediction. The proposed method achieves both worst-case empirical coverage and calibrated, multivalid coverage, which clearly advances the existing literature.\n\n2. The paper discusses the comparisons with related works well, and is technically solid with clearly presented definitions, assumptions, theorems, and proofs.\n\n3. The proposed surrogate loss (Definition 3.4) and its induced techniques for proving the threshold-calibrated multivalid coverage is theoretically interesting.\n\n\nWeakness:\n\n1. Section 2 and Section 3 are overloaded with technical notations. The motivations of the proposed coverage guarantee (Definition 2.1) and the specific design of Algorithm 1 are not well-explained, at least intuitively. For example, Definition 2.1 introduces threshold calibration, you may want to explain why you want the marginal coverage to be additionally satisfied for multiple buckets simultaneously, as opposed to the case where there is just a single bucket. \n\n2. In line 32, the authors claimed that the proposed method has worse-case adversarial guarantees, which is very strong in my perspective. However, the threat model is not defined clearly (without a formal definition) at the beginning, thus hard to evaluate its significance. What are the differences between the adversarial setting considered here and the setting of covariate shift such as Tibshirani et al. [2019]? Does the adversary need to satisfy some assumptions (e.g., Definition 3.1)? [1] is a related paper on this, so should be mentioned or discussed in the paper as well.\n\n[1] Adversarially Robust Conformal Prediction, Asaf Gendler and Tsui-Wei Weng and Luca Daniel and Yaniv Romano, ICLR 2022\n\n\nTypos: \n\nLine 148: n is not defined\n\nLine 149: w belongs to -> q belongs to\n 1. In the experiments, most of the considered tasks are regression tasks. You only compared with Angelopoulos et al. [2020] on marginal coverage, how about the conditional coverage for subgroups? In general, I am curious about the conditional coverage performance of your algorithm for classification tasks.\n\n2. How do you decide the number of buckets m for your algorithm for a practical task? Yes",
" The authors propose a calibration method in the spirit of (adaptive) conformal prediction for providing (conditional) coverage guarantees when prediction confidence intervals/sets without necessarily requiring exchangeable examples. Thus, the method is applicable to dependent time series data in addition to standard i.i.d. settings. In comparison to prior work, the method provide coverage conditional on arbitrary subgroups of inputs and conditional on the calibrated threshold in timer series data. To this end, an adversarial setting is assumed where the adversary chooses inputs and conformity scores from a distribution that is assumes to be reasonably smooth (i.e., not concentrated on individual points) – this is the only assumption made as far as I can tell. Strengths:\n- The paper is generally well-structured and notation is introduced clearly – even though the paper is very notation heavy in general, see weaknesses.\n- The advantages of the proposed method are described fairly clearly in the intro.\n- Generally, the paper addresses an important problem of (conditional) coverage guarantees without exchangeable data. While various papers have addressed individual parts (e.g., group-conditional coverage or calibration set conditional coverage), to the best of my knowledge, obtaining coverage conditioned on arbitrary groups as well as thresholds without requiring exchangeability is not possible so far.\n- The authors demonstrate their approach in various settings similar to previous papers to demonstrate the method on various tasks involving standard classification and regression, time series data or covariate shift, mostly focusing on coverage and inefficiency (set/interval size).\n- A sketch of the proof is provided in the paper.\n- The appendix obtains sufficient details on experimental setup.\n\nWeaknesses:\n- The paper is very dense and notation-heavy. I appreciate that this is necessary for communicating the guarantee and a sketch of the proof, but I believe the authors could improve structure and writing that would make following the paper easier. Here are som individual points that I stumbled over:\n\n - Lines 39ff, the point of threshold-conditional coverage is not very clear imo.\n\n - Lines 56ff, this argument is also not very clear when the reader is unfamiliar with Gupta et al. - I skimmed through Gupta et al., but it is a very dense paper itself. \n\n - Footnote 1 seems rather important in terms of supporting one of the contributions – generally, I think a footnote is not the right place for this even if it might save space.\n\n - The introduction to notation in section 2 is helpful, but generally I would prefer it the authors would repeat the notation inline at appropriate places. Currently, nearly in every statement, I need to scroll back and check notation. Alternatively, a separate notation section could help, because sometimes it is also not clear where to scroll to to find the notation being introduced.\n\n - An example of the above is the definition of G_T(i) in definition 2.1.\n\n - Another example in K_\\epsilon in the theorem.\n\n - Definition 2.1 is a bit unclear regarding the choice of \\alpha – the choices provided seem somewhat arbitrary in the beginning and there is no discussion of this. It only becomes slightly clearer later in the paper.\n\n - Surrounding Definition 3.1, I think the authors could do a better job providing an intuition of the adversary. In the beginning it is very unclear what the adversary can do and what not. 
Also it is unclear what the adversary is supposed to model in practice (i.e., in practice, the scores are computed by the model and the adversary is just a way to model this adversarially in order to obtain a guarantee).\n\n - Algorithm 1 is typset too early in the paper in my opinion. How it works only becomes clear throughout the proof sketch. In fact, the proof sketch is used to introduce the method very indirectly. I spent half an our on the algorithm assuming that I would have to understand how it works, realizing later that the proof sketch is basically meant to walk me through it. I think this a big flaw structure and notation wise.\n\n - There should be more discussion on the theorem, what this bound means in practice, how different variables influence it – this is partly done in the following remark but in my opinion not sufficient to give a good understanding. Might also be because the analysis comes later.\n\n - 3.1 starts using notation from the algorithm, which was very hard to follow without reading 3.1 first. So this is a bit of a circle which should be broken in 3.1 or by moving the algorithm. It is also not made clear in the beginning that 3.1 will actually explains the algorithm.\n\n - Observation 3.1 is not straight forward. I feel this is by construction but should be worth 1-2 sentences of why this is.\n\n - In definition 3.4, the index t in L_t, V_t and \\pi_t is unclear. Is the t used in V_t the same as in L_t or \\pi_t (the arguments). This seems relevant as these seem disentangled in Lemma 3.1.\n\n- Regarding related work, this paper seems to build on the work by Gupta et al. Generally, I would appreciate making the similarities clearer without requiring to read Gupta et al. But as the paper is already dense I would actually prefer not going into detail in comparison to Gupta et al. in the main paper but rather deferring that to the appendix and making this clear in the beginning of the main paper, this could also save some space. Nevertheless, can the authors give a concise discussion of the differences to Gupta et al. And what they built on/develop further? In the paper this is very spread and I was unable to get a clear picture.\n- I think that split conformal prediction does provide calibration-set conditional coverage as discussed in detail in [a] and [b] and I think discussing these results would be important. This is because, assuming exchangeability (I know that this work goes beyond exchangeability), the benefit of MVP over split conformal prediction is limited – both are able to obtain calibration set and group-conditional coverage (when groups are known which AFAIK is the case here, too). Can the authors comment on this? (I acknowledge that [b] is very recent and more like concurrent work)\n\n[a] Vovk. Conditional Validity of Inductive Conformal Predictors\n[b] Bian et al. Training-conditional coverage for distribution-free predictive inference\n\n- In figure 2, the groups seem to be very easy, split conformal prediction also provides conditional coverage except for one group. For me it looks like the groups are not really interesting – usually I would expect groups to obtain widely varying coverage levels if these groups are aligned somewhat with difficulty (like classes, fairness attributes or so).\n- In the time series experiments, I am not sure whether I understand the motivation of adding noise. Can the authors comment on this? 
Is this just to make the task harder and show that MVP works better (because it doesn’t work better without noise)?\n- In figure 3, what are the average lengths for both methods? Also, split conformal prediction always predicting the same interval is mainly due to the coarse histogram, right?\n- In figure 4 a, shouldn’t there be any groups where ACI is overcovering. Marginal coverage is guaranteed but it underestimates coverage on all of them. Am I missing something?\n- On ImageNet, the calibration set is very small, and RAPS still performs better (and RAPS is even less efficient then simpler conformity scores, see [c]). Does this mean that MVP is generally less relevant in split settings with exchangeability? Or does MVP still obtain better class/group-conditional coverage on ImageNet? (These experiments are missing in my opinion.)\n\n[c] https://arxiv.org/abs/2110.09192\n\n- Also, how would the ImageNet results change when performing trials instead of fixing calibration and test set?\n- In any of the non-ImageNet experiments, are there cases where baselines do not provide calibration set/threshold-conditional coverage and could this be evaluated somehow?\n\nConclusion:\nI believe the proposed method is a good contribution to the community. I only see two drawbacks: First, the paper is very difficult to follow, requiring the reader to sit down and be able to annotate the paper and write down definitions/notation to follow the algorithm and discussion. I think this could be improved by devoting more space to it and maybe moving some experiments or discussion of Gupta et al. to the appendix. Second, for some experiments the picture is less clear and this should be discussed a bit better – i.e., in which settings the proposed method really works well/better. This also includes evaluating or commenting the threshold-conditional coverage which is “advertised” in the introduction. While I was unable to check the proof in the appendix in detail, due to density and amount of experiments, the proof sketch seems reasonable and valid as far as I can tell, but it would be good if another review could confirm this. See weaknesses and conclusion above. See weaknesses and conclusion above.",
" The paper presents a novel, lightweight conformal-inspired algorithm for quantifying uncertainty in an online setting. This is a good contribution: it is highly competitive and even outperforms existing methods for making sequential predictions with valid coverage, even under adversarial settings. **Strengths**\n\nThe proposed framework has several key advantages: \n\n(1) It uses the entire data for training a predictive model and does not require using holdout data for calibration.\n\n(2) The coverage guarantee holds in an adversarial setting.\n\n(3) The coverage guarantee holds conditionally on the threshold used to form the prediction set.\n\n(4) The coverage guarantee holds conditionally on possibly intersecting groups, which needed to be defined apriori.\n\n(5) Numerical experiments demonstrate the validity of the theory, showing the advantage of this method in a wide variety of settings. In particular, I find the results presented in Section B.3 illuminating.\n\n**Weaknesses**\n\n(1)\tThe paper is written to experts in conformal prediction, and I am worried that this will hurt the deployment of the proposed method. Specifically,\n\na. Consider the introduction for example: what is a non-conformity score? What is $f$? What is $S_t$? \n\nb. Next, moving to Algorithm 1: how the user should interpret each of the steps? Perhaps the authors can name, or explain in words the purpose of each step so that the reader will not lose intuition. \n\nc. Consider Theorem 3.1: it asks to set $\\eta $, but where $\\eta $ is being used? It took me some time to understand that $\\eta$ appears in the surrogate loss. This makes me wonder how does it affect the performance? How should the user set this parameter?\n\nd. Consider pausing after Theorem 1 and explain it in words to ease the interpretation of this result.\n\ne. Section 3.1 starts with a discussion about the differences between the proposed method and the one of Gupta et al. (2022). But, at this point, the reader is still trying to understand the implications of Theorem 1. Consider moving this discussion after sketching the proof. \n\n(2) It is not clear how to check (online, as data arrive) that Definition 3.1 holds for a given triplet (\\rho,r,m). I understand the statement that “We can also algorithmically enforce smoothness by perturbing the conformal scores with small amounts of noise from any continuous distribution, and so we should think of smoothness as a mild assumption”. But this will then reduce statistical efficiency, as algorithmic noise will be added to the scores. And also, how can we even know that we violate Definition 3.1?\n\n(3) Section B.1: If I understand the experiment correctly, I believe the authors should also compare their method to ACI. Compared to split conformal, the gain in performance is due to the fact that MVP is applied online and re-fits the predictive model once a new test point is observed; this setting is more similar to the way that ACI (or its time-series version [ACI-TS]) is implemented.\n\n[ACI-TS] Zaffran, Margaux, Aymeric Dieuleveut, Olivier Féron, Yannig Goude, and Julie Josse. \"Adaptive conformal predictions for time series.\" arXiv preprint arXiv:2202.07282 (2022).\n\n(4) Section B.2.1: same comment as above.\n\n(5) Section B.4: same comment as above. See the comments under the weaknesses section. The authors included a discussion about ethical considerations."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"oyz1Rmm_Qsv",
"sY2oxEv1w_Y",
"0R_FGzdN52p",
"o8J9it7g9Xpv",
"XNedzv0CP3",
"ISHdf4AHB_",
"Wfr1IhR1vSG",
"Wfr1IhR1vSG",
"NAmLlkFpdJd",
"nips_2022_QNjyrDBx6tz",
"nips_2022_QNjyrDBx6tz",
"nips_2022_QNjyrDBx6tz"
] |
nips_2022_KxVSnZVuZZ | Constrained Langevin Algorithms with L-mixing External Random Variables | Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used in Markov Chain Monte Carlo (MCMC) sampling, optimization, and machine learning. In recent years, the non-asymptotic analysis of Langevin algorithms for non-convex learning has been extensively explored. For constrained problems with non-convex losses over a compact convex domain with IID data variables, the projected Langevin algorithm achieves a deviation of $O(T^{-1/4} (\log T)^{1/2})$ from its target distribution \cite{lamperski2021projected} in $1$-Wasserstein distance. In this paper, we obtain a deviation of $O(T^{-1/2} \log T)$ in $1$-Wasserstein distance for non-convex losses with $L$-mixing data variables and polyhedral constraints (which are not necessarily bounded). This improves on the previous bound for constrained problems and matches the best-known bound for unconstrained problems.
| Accept | After going through all the reviews, rebuttals, and discussions in detail I am recommending a borderline acceptance for the paper. More precisely, the technical contribution of the paper is significant, even though there have been some concerns raised regarding the motivation/applicability of the setup. However, I do believe that the merits of the paper outweigh its limitations. I recommend the authors to implement all the suggested changes. | train | [
"K6LmPBMtohrr",
"fEVm8__a9zq4",
"x_9DpKnkbGx",
"QcpBf0Zb559",
"VSgLg21mLfd",
"y-PbbUf7NVCh",
"sFp5nKQbQeR",
"3fVWpGPMLPY",
"LWRuIv1_Zaa",
"RV0n2SvFb4n",
"z_B--AlSmpx"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I stand by my original evaluation.",
" (5) Response to Q2:\n\n- Sampling from Gibbs distribution, i.e. Langevin algorithms is used for high-dimensional and large-scale sampling applications. Gibbs distribution is an invariant distribution of the Langevin dynamics. And Langevin dynamics can converge to arbitrary probability distributions, so we can sample from a large amount of distribution via Langevin algorithms.\n \n- It is related to Bayesian posterior sampling. Langevin algorithms can be used to sample from the posterior in Baysian settings. When setting $\\beta = 1$, Langevin algorithms can be used for posterior sampling, which is the basic idea on Bayesian learning. More details about Langevin algorithms for Bayesian learning is introduced in [welling2011bayesian].\n \n- It is not only for obtaining non-convex optimization guarantees. Instead, Theorem 1 indicates the distance between a target distribution and the distribution of the samples from the algorithms. Some extra work will be added to the revision which shows the near-optimality of the algorithms iterates.\n\n(6) Response to Q3:\n \nLSI method in Vempala and Wibisono utilizes the KL divergence, which is infinite when the initial condition is deterministic. Therefore, methods using LSIs typically require the initial density to be supported everywhere. However, Wasserstein distance is well defined with deterministic initial condition. So we choose to use Eberle's result which uses Wasserstein distance. The detailed reasoning is below:\n \nLet $q(x)$ be the stationary distribution, and $q(x) = \\frac{1}{\\sqrt{2 \\pi}} e^{-x^2/2}$, which is a standard Gaussian distribution.\n\nLet the initial distribution $p(x)$ be Gaussian with mean 0 and variance $\\sigma^2$. So, $p(x) = \\frac{1}{\\sqrt{2 \\pi}\\sigma} e^{-x^2/(2\\sigma^2)}$. As $\\sigma \\rightarrow 0$, $p(x)$ approaches to a deterministic distribution which concentrates at 0 with probability 1.\n\nThe KL divergence can be computed as $\\int p(x) \\log \\frac{p(x)}{q(x)} dx = \\frac{\\sigma^2}{2}- \\frac{1}{2} - \\log \\sigma$ which goes to $\\infty$ as $\\sigma \\rightarrow 0$.\n\nIn contrast, $W_1$ remains bounded since both $p(x)$ and $q(x)$ have bounded variance.\nIn particular, if $P$ and $Q$ are the respective measures, then \n\\begin{align*}\nW_1(P,Q) \\le \\int \\|x\\| p(x) dx + \\int \\|x\\| q(x) dx \\le \\sigma +1.\n\\end{align*}\n\nIn particular, when $P$ corresponds to the Dirac delta centered at 0 (i.e. we have $x=1$ with probability 1), we have $W_1(P,Q) = \\int \\|x\\| q(x) dx \\le 1$.\n\nSo we can see that $W_1$ is more flexible which allows the deterministic initialization. This is why we use Eberle's result.\n\nTo clarify this point, we will make some comments on work using LSIs in the revision.\n\n",
" (1) Response to Major Concern 1:\n\nHere we summarize the contribution of this paper, which also implies the motivation of solving this specific problem. \n\nOur work derives a tight convergence rate of Langevin algorithms with L-mixing process under polyhedral constraints. Polyhedral constraint is a common constraint of optimization problems, and include boxes, orthants, and simplex constraints. Though polyhedral constraint as a special type of the convex constraint seems less general compared with the compact convex constraint in [lamperski2021projected] and the non-convex constraint in [sato2022convergence], we remove the boundedness assumption of the constraint set. \n\nSpecifying L-mixing data variables in our analysis is an extension to the existing works which either have no external random variables or have IID external random variables. Though the work [chau2019stochastic] also considers the L-mixing random variables, but there is no constraint in that case. We want to emphasize that L-mixing random variables generalize the IID assumption in [raginsky2017non,lamperski2021projected], in particular it covers many stable Markov models. More specifically, all uniformly ergodic Markov chain can be proved to be L-mixing. Thus, the convergence rate of Langevin algorithms with L-mixing data variables we obtain gives theoretical guarantee to a broad set of real-world problems. Moreover, our result matches the result of the work [chau2019stochastic], which is the first work considering the L-mixing random variables but without constraints at all. Though with polyhedral constraints, our work matches the best known convergence rate in the unconstrained Langevin algorithms for non-convex problems [chau2019stochastic] and tighter than the work only considering IID random variables [lamperski2021projected].\n\nOverall, our work fills in the gap of the theoretical analysis of constrained Langevin algorithms with dependent data streams and provides convergence guarantees of Langevin algorithms with arbitrary polyhedral constraints and L-mixing data variables.\n\n(2) Response to Major Concern 2:\n\nYou are correct that we have this exponential term $e^{\\beta \\ell R^2/8}$, but it is not dependent on the dimension. Instead, such an exponential bound depends on the parameter $\\beta$ and $R$. We will clarify how the constants depends on $\\beta$ and scale with $beta$. Besides, we will make a more explicit expression for the main constants $c_1, \\ldots, c_4$ in the revision such that the dimension and parameter dependencies will be shown clearly.\n\n(3) Response to \"Insufficient sampling literature\":\n\nBased on this suggestion, we will update the literature review in the revision. Below is a review summary including the recent unconstrained sampling results.\n\nMore recent relevant work are given in the list below:\n- Difan Zou, Pan Xu, and Quanquan Gu. Faster convergence of stochastic gradient langevin\n dynamics for non-log-concave sampling. In Uncertainty in Artificial Intelligence, pages 1152–1162. PMLR, 2021.\n- Ruilin Li, Hongyuan Zha, and Molei Tao. Sqrt (d) dimension dependence of langevin monte\n carlo. arXiv preprint arXiv:2109.03839, 2021.\n- Murat A Erdogdu, Rasa Hosseinzadeh, and Shunshi Zhang. Convergence of langevin monte\n carlo in chi-squared and rényi divergence. In International Conference on Artificial Intelligence\n and Statistics, pages 8151–8175. 
PMLR, 2022.\n- Krishna Balasubramanian, Sinho Chewi, Murat A Erdogdu, Adil Salim, and Shunshi Zhang.\n Towards a theory of non-log-concave sampling: first-order stationarity guarantees for langevin\n monte carlo. In Conference on Learning Theory, pages 2896–2923. PMLR, 2022.\n- Dao Nguyen, Xin Dang, and Yixin Chen. Unadjusted langevin algorithm for non-convex weakly\n smooth potentials. arXiv preprint arXiv:2101.06369, 2021.\n- Joseph Lehec. The langevin monte carlo algorithm in the non-smooth log-concave case. arXiv\n preprint arXiv:2101.10695, 2021.\n- Sinho Chewi, Murat A Erdogdu, Mufan Bill Li, Ruoqi Shen, and Matthew Zhang. Analysis of\n langevin monte carlo from poincar\\’e to log-sobolev. arXiv preprint arXiv:2112.12662, 2021.\n\nWe will have more discussions on these works in the revision.\n\n(4) Response to Q1:\n\nThe dimensional dependency is $\\mathcal{O}(n)$ based on the list of constants in the appendix. The detail of getting such a dimensional dependency is shown below:\n\nIn Theorem 1, $c_1, c_2, c_3, c_4$ show up. Using the constant list, we can see $c_2$ and $c_4$ has no dimensional dependency, $c_1$ has $\\mathcal{O}(\\sqrt{n})$) dependency and $c_3$ has $\\mathcal{O}(\\sqrt{n}) + \\mathcal{O}(n)$ dependency. So overall, the dimensional dependency is $\\mathcal{O}(n)$.\n\n\n\n\n",
" Thanks for the valuable comments. Below, we will respond to the weaknesses, limitations you pointed out and answer the questions you brought up.\n\n(1) Response to Weaknesses and Limitations:\n\nWe want to clarify that specifying L-mixing data variables is not a limitation of our work, but an extension to the existing works. Most of the existing literature do not consider the external random variables at all [dalalyan2012sparse,dalalyan2017theoretical,durmus2017nonasymptotic,ma2019sampling,majka2018non, wang2020fast]. There are some works considering the external randomness, but only restricted to IID random variables [raginsky2017non,lamperski2021projected]. Though the work [chau2019stochastic] also considers the L-mixing random variables, there is no constraint in that case. We want to emphasize that L-mixing random variables generalize the IID assumption, in particular it covers most of the stable Markov model. More specifically, all uniformly ergodic Markov chain can be proved to be L-mixing. Thus, the convergence rate of Langevin algorithms with L-mixing data variables we obtain gives theoretical guarantees to a broad set of real-world sampling and optimization problems. \n\nWe are clear about the limitation of the constraint assumption. Though polyhedral constraint as a special type of convex constraint seems less general compared with the compact convex constraint in [lamperski2021projected] and the non-convex constraint in [sato2022convergence], we remove the boundedness assumption of the constraint set. In that sense, our work is dealing with a weaker assumption in terms of boundedness. Not to mention that a lot of works do not consider constraints at all [raginsky2017non,xu2018global,erdogdu2018global,cheng2018sharp,chen2020stationary]. Therefore, though the constraint is specified as a polyhedron in our case, we still make progress compared with the existing work. Besides, the polyhedral constraint is very common in applications with box and simplex constraints, so the convergence analysis of Langevin algorithms with polyhedral constraints is of practical value.\n\nA specific example matches our assumption is the system identification of the autoregressive model. We can estimate the output via a neural network and define the loss function as the mean square error, which is stated in [chau2019stochastic]. This is a typical non-convex learning problem and if we clip the parameters of the neural network, the constraint is polyhedral. Besides, the regularization term in the loss function makes the strong convexity outside a ball assumption holds.\n\n(2) Response to Questions:\n\nWe present a practical case above. To the best of our knowledge, our convergence rate matches the best known rate of the unconstrained Langevin algorithms for non-convex problems [chau2019stochastic], but arbitrary polyhedral constraints are enforced in our case. In other words, our work provides theoretical guarantees of the convergence of the constrained Langevin algorithms with L-mixing random variables, which covers the case of IID random variables and thus is more general.\n",
" (5) Response to Question Two:\n\nStrong convexity outside a ball is an assumption to ensure non-explosiveness of the solutions to the Stochastic Differential Equations. In other words, strong convexity is one of the sufficient conditions for the stochastic stability. In literature [dalalyan2019user,durmus2019high,chatterji2018theory,baker2019control] on unconstrained Langevin algorithms, convergence analysis in Wasserstein distance is conducted under the assumption of strong convexity over the whole $\\mathbb{R}^n$ domain. The paper [majka2020nonasymptotic] uses the term contractivity at infinity (which is the same as our assumption) to cover a wider class of SDEs such as equations with drifts given by double-well potentials to replace the global log-concavity assumption. \n\nThe dissipativity condition in [chau2019stochastic] and [raginsky2017non] is not equivalent to the strong convexity outside a ball assumption. Instead, the dissipativity condition is weaker. Here we present some algebraic manipulation to show that the relationship between these two sufficient conditions for stochastic stability.\n\nAssume $\\bar f(x)$ is $\\mu$-strongly convex outside a ball of radius $R>0$, i.e. $(x_1 - x_2)^\\top(\\nabla \\bar f(x_1) - \\nabla \\bar f(x_2)) \\ge \\mu \\|x_1 - x_2\\|^2$ for all $x_1, x_2 \\in \\mathcal{K}$ such that $\\|x_1 - x_2\\| \\ge R$. Since $0 \\in \\mathcal{K}$, we can let $x_2 = 0$ in the assumption.\nTherefore, we have $x_1^\\top(\\nabla \\bar f(x_1) - \\nabla \\bar f(0)) \\ge \\mu \\|x_1 \\|^2$ for all $x_1 >R$. This is to say $x_1^\\top \\nabla \\bar f(x_1)\\ge \\mu \\|x_1 \\|^2 + x_1^\\top \\nabla \\bar f(0) $ for all $\\|x_1\\| >R$. To ensure this inequality to hold, it suffices to have $x_1^\\top \\nabla \\bar f(x_1)\\ge \\mu \\|x_1 \\|^2 + \\|x_1\\|\\| \\nabla \\bar f(0)\\| $ for all $\\|x_1\\| >R$ by C-S inequality. Note that $\\|x_1\\|\\| \\nabla \\bar f(0)\\| $ is non-negative. Compared with the disspativity condition in \\cite{chau2019stochastic} and \\cite{raginsky2017non} (There exists $a,b > 0 $ such that for all $x \\in \\mathbb{R}^n$, $x^\\top \\bar f(x) \\ge a \\|x\\|^2 - b $). We can see that strong convexity outside a ball implies the disspativity condition if it is also constrained outside a ball. \n\n\nBesides, the strong convexity outside a ball assumption can be enforced using weight decay regularization, which is common in machine learning problems. More details can be seen in the appendix of [raginsky2017non]. \n \nOverall, strong convexity is a fairly standard assumption for the existing convergence analysis of Langevin algorithms in Wasserstein distance. We limit the strong convexity outside a ball so that within such a ball, the objective function can be non-convex. This is why we identify our work as non-convex learning. \n\n(6) Response to Question Three:\n\nWe will add the following paragraph to the beginning of Section 3.3:\n\nThis subsection describes the main ideas in the proof of Lemma 3. The results highlighted here, and proved in the appendix, cover the main novel aspects of the current work. The first novelty, captured in Lemmas 4 and 5, is a new way to bound stochastic gradient Langevin schemes with L-mixing data from a Langevin method with the data variables averaged out. The key idea is a method for examining a collection of partially averaged processes. The second novelty is a tight quantitative bound on the deviation of discretized Langevin algorithms from their continuous-time counterparts when constrained to a polyhedron. 
This result is based on a new quantitative bound on Skorokhod solutions over polyhedra.",
" Thanks for valuable comments. Below, we will respond to the weaknesses, limitations you pointed out and answer the questions you brought up.\n\n(1) Response to Weakness One:\n\nThough the analysis may look similar to that in [24], the problem we solve is specifically constrained in a polyhedral domain and with L-mixing random variables. Such a constraint set removes the boundedness assumptions, and the random variables under consideration are not IID. In [24], the constraint is convex compact, and the external random variables are IID. The author of [24] uses Lemma 2.2 of [34] (Tanaka, 1979) to bound the distance between the continuous-time process and the discretized process, which is rather loose. However, we are able to get a tight bound between the continuous-time process and discretized process with the constructive proof of the earlier result from [14] without boundedness assumption. The proof of Theorem 9 is one of the major novelties of this paper. The discussion of such a technical novelty appears in Section 4. Besides, instead of dealing with commonly considered IID random noise, we consider a class of L-mixing process as the external random variables, which is typical in system identification and time-series analysis. L-mixing random variables generalize the IID random variables and cover most of the stable Markov models. The consideration of such a kind of time-correlated random variables is another novelty of this paper. We present the technical novelties in the introduction part and also highlight them in the main paper.\n\n(2) Response to Weakness Two:\n\nWe will add the sub-optimality proof of Gibbs algorithms in the revision. This will show the the dependency of parameter $\\beta$ explicitly.\n\n(3) Response to Weakness Three:\n\n- It is true that Lemma 7 and Lemma 8 have undesired exponential error bounds shown as $e^{\\eta \\ell k}$. However, Lemma 7 and Lemma 8 are used to prove Lemma 3. And the proof of Lemma 3 shows how to completely eliminate the exponential dependency and get a time-independent bound of $O(\\eta^{-1/2} (\\log \\eta^{-1})^{1/2})$.\n\n- The work [30] only considers the IID external random variables, while our work deals with time-correlated external random variables (L-mixing processes) and the exponential dependency appears in the process of averaging out the external random variables. Hence, the result of the discretization error is not comparable.\n\n(4) Response to Question One:\n\nWe will add the definition of 1-Wasserstein distance in the revision. The definition of polyhedral set is defined in Section 4, which we believe is easier for the reader to keep track of the technical details due to the page limit.\n\n\n\n\n",
" Thanks for the valuable comments. Below, we will respond to the weaknesses, limitations you pointed out. \n\n(1) Response to Weaknesses:\n\n- Though the analysis looks similar to the prior work [24], we have a more general assumption on the external random variables which are assumed to be the L-mixing processes. The class of L-mixing processes is time-correlated and covers many applications in system identification and time-series analysis. In [24], IID random variables are considered, which are a special case of L-mixing processes. Besides, in [24] the constraint set is closed and bounded and during the proof, the diameter of a ball covering the constraint set is utilized. Instead, our work studies the closed polyhedral constraint, which is not necessarily bounded. A tighter bound of the discretized process and continuous-time process is obtained through a constructive proof of an earlier result from [14]. \n\n- This is a good point, that it may be possible to avoid the technicalities of Skorokhod problems by changing the algorithm. We will append the following sentence to the conclusion:\n\n ``Future work will examine whether the projection step, and thus Skorkhod problems, can be circumvented by utilizing different algorithms, such as those based on proximal LMC [5].''\n\n\n(3) Response to Limitations:\n\nWe will put explicit bounds for the main constants, $c_1,\\ldots,c_4$ in the main paper. As you mention, some constants will have a factor of the form $e^{\\beta \\ell R^2/8}$, which can be large, especially if $\\beta$ or $R$ is large. This type of scaling is likely unavoidable, since the Langevin algorithm gives near-optimal solutions to a general class of non-convex optimization problems. So, NP-hardness would imply that the method would scale badly in some instances. In this case, the scaling shows up in the constant factors. \n\n",
" The paper aims at deriving new bounds for langevin algorithms in some specific cases, showing that the bound of convergence it attains is better than previously obtained in the literature.\nIt focuses on obtaining new 1-Wasserstein distance for non-convex losses with L-mixing data variables and polyhedral constraints, showing that the rate of convergence is faster. The paper is technically well written and presents a novel bound for Langevin algorithms. The math is advanced and the proof technically difficult. \nFurthermore, the paper is well organized and the derivation is clear.\nI checked some of the proofs which were very involved and the resulting paper is impressive theoretically.\n\nThe main weaknesses of the paper are to my mind thelimitation to the specific case with L-mixing data variables and polyhedral constraints, which is Unfortunately not supported by a practical example to show the relevance of the approach proposed by the authors. My main question to the authors is can you find a practical case where the bound they obtain begins to be optimal, especially with respect to the rest of the literature ? Which would validate and motivate to my mind the impressive theoretical work you show. The limitations are we’ll described by the authors and are mainly the specificity of the results.",
" Authors study the problem of constrained sampling guarantees for the projected Langevin algorithm in a non-asymptotic sense in Wasserstein 1 metric. - Strengths:\n+The paper is well written and the proofs look correct as far as I can tell.\n+ Improves the previously known rates under milder conditions.\n\n- Weakness:\n+ Major concern 1: Lack of motivation in the solved problem.\n+ Major concern 2: it is not clear how the dimension and other problem parameters interact with the convergence rate. Since the authors are relying on Eberle's reflection coupling, the dimension dependency will be exponential, e.g. exp{LR^2}.\n+ Insufficient recent sampling literature review: Unconstrained sampling with Langevin algorithm is a well studied problem, especially under quadratic growth conditions that the current paper is making. Authors cite a few papers from 2017 referring to them as recent, yet there has been a lot of progress since then. \n\n- Minor concerns:\nI listed a few typos that I noticed below.\n+ l20 Deep Neural networks\n+ l107 missing \\nabla \n+l133 the interpolation process is not properly defined, so is the Brownian motion (and its filtration).\n+l85 before ever mentioning about f, we are introduced its gradient. \n\n Q1: What is the dimension dependence of this convergence guarantee?\n\nQ2: What is the motivation for sampling from the Gibbs potential \\bar{f}? Is this related to a Bayesian posterior sampling problem or any practical sampling problem? Is it only for obtaining non-convex optimization guarantees?\n\nQ3: What is the benefit of using Eberle's result instead of a functional inequality based approach such as LSI + Holley-Stroock as in Vempala&Wibisono if the dimension dependence is already exponential? See the section \"Strengths and Weakness\".",
" This paper studies the problem of non-asymptotic convergence guarantees in 1-Wasserstein distance for projected Langevin Monte Carlo with (unbounded) polyhedral constraints, non-convex losses and L-mixing data. The convergence rate $O(T^{-1/2}\\log T)$, is faster than the previous works on constrained Langevin algorithms in the literature. This is a solid theoretical paper and the analysis seems rigorous. This paper obtains the problem of non-asymptotic convergence guarantees in 1-Wasserstein distance for projected Langevin Monte Carlo with (unbounded) polyhedral constraints, non-convex losses and L-mixing data, with the convergence rate $O(T^{-1/2}\\log T)$ that is faster than the previous works on constrained Langevin algorithms in the literature. My understanding is that the paper has made the following contributions (1) the convergence rate is faster than the previous works for constrained Langevin algorithms (2) the continuous-time convergence rate in 1-Wasserstein distance extends some existing literature to allow reflected boundary (3) novel discretization error bound where the constraint can be unbounded. \n\nThere are several weaknesses of the paper that makes me question the technical novelty and the significance of the contributions of the paper.\n\n(1) Even though the results are new, similar analysis for non-convex learning for projected Langevin algorithms has already been appeared in [24] for example. The author(s) should do a better job at explaining what are the technical novelties and innovations that allow them to improve the convergence rate and allow the result to work on unbounded constraint set.\n\n(2) In the model, there is a scaling parameter $\\beta$, that is known as the inverse temperature. When you are doing sampling, you can simply let $\\beta=1$. But when you need to use Langevin algorithms to solve a non-convex optimization problem, you need to have large $\\beta$ in order for the Gibbs distribution to concentrate around the global minimizer of the target; see e.g. [30]. The authors at least should comment on the order of dependence of the constants on $\\beta$ and whether it is possible to allow $\\beta$ to be large to be used for non-convex optimization. Moreover, Theorem 1 is a result that can be used for sampling for non-log-concave target distribution, but can you use the ideas from [30] to obtain results for empirical risk minimization/population risk minimization? It is not clear to me because your result is in 1-Wasserstein and the constraint set is unbounded. In some sense, it is not clear to me your 1-Wasserstein result can lead to any non-convex optimization guarantees on an unbounded domain.\n\n(3) In Lemma 7, Lemma 8, the error bound has the term $e^{\\eta\\ell k}$, where $\\ell$ is the parameter of the $\\ell$-smoothness of the target, $\\eta$ is the stepsize and $k$ is the number of iterates. The authors should comment on how tight/good these bounds are. I am a bit worried that these are too loose. For example, to make Langevin algorithm work, you need the continuous time dynamics to be close to the invariant distribution, also known as the Gibbs distribution, which requires $\\eta k$ to be large. But if $\\eta k$ is large, then the error bounds in Lemma 7 and Lemma 8 are exponentially bound. The discretization error bound in [30] does not have exponential dependence on $\\eta k$ and it would certainly be great if the author(s) can improve upon this. 
(1) On page 2 and page 3, I suggest the author(s) to define 1-Wasserstein distance and the polyhedral just in case some of the less theoretical readers are not familiar with these concepts.\n\n(2) The authors should discuss or make some comments about the assumption in Section 2.4. You assumed that $\\bar{f}(x)$ is $\\mu$-strongly convex outside a ball of radius $R>0$. This can be confusing to some readers who are not familiar with such assumptions because on the surface it gives the readers the impression that you are assuming almost strong-convexity whereas in the abstract you are claiming that you are doing non-convex learning. Please provide some references and comments on whether this is a common assumption, and whether it is equivalent for example to dissipativity condition in the literature.\n\n(3) You mentioned that most of the novel work in the paper focuses on deriving Lemma 3 and you provided Section 3.3 for a proof overview for Lemma 3. But I think what would really be nice is to provide some high-level general discussions in plain English either before Section 3.3. or at the beginning of Section 3.3 about the novelty of your techniques to obtain Lemma 3 as well as the high-level descriptions of the steps needed to achieve Lemma 3.\n\n(4) Journal names etc. should be capitalized in the reference. For example, the journal name in [4] should be capitalized. Also in [6] and [9], langevin should be Langevin. Indeed, the author(s) included a section near the end of the paper discussing the limitations of their work. ",
" This paper gives W_1 convergence guarantees for the constrained Langevin algorithm with stochastic gradients over a polyhedral constraint set. The assumption on the target function is that it is Lipschitz everywhere and strongly convex outside of a ball; moreover, the stochastic gradients are not i.i.d., but rather are obtained from an L-mixing process (which quantifies the amount of dependence in the noise). Similar problems have been studied before on more general constraint sets, but in the case of a polyhedral constraint set this paper obtains the sharpest bound thus far. The main challenge in this work is the mathematical difficulty associated with studying SDEs reflected at the boundary. This paper makes a good contribution to a challenging problem. Technically, the analysis is quite similar to that of the prior work [24], which dampens the novelty. However, I believe that this is outweighed by the virtue of analyzing the projected Langevin algorithm, for which quite little is known. I also found the writing to be clear.\n\nAnother notable weakness is the lack of theoretical comparisons with other approaches, such as proximal LMC [5]. I understand that the settings considered in prior works may differ from the one considered here, but nevertheless it would be helpful to include a discussion on whether techniques in those other works can be expected to apply to this setting, especially since the other approaches avoid the technicalities involved in the Skorokhod problem.\n\nSpecific Comments:\n- Line 107, typo in the Lipschitz gradient assumption\n- Line 691 in supplement, typo: or -> of None. In the paper, quantitative dependence on problem parameters such as the dimension is not made explicit until the appendix. This is an issue because several parameters are quite large, e.g., there is an exponential dependence on the radius R of the ball without strong convexity, and an exponential dependence on the rank of the matrix specifying the polytope constraints. For the former, this is a standard feature of works which make the “strongly convex outside ball” assumption, but nevertheless such exponential dependence should be discussed clearly in the main text (especially since the abstract claims to handle non-convex functions, but realistically these bounds are quite poor for very non-convex functions). In particular, the contraction factor a is one of the most important parameters and an explicit bound on a should appear in the main text."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"sFp5nKQbQeR",
"LWRuIv1_Zaa",
"LWRuIv1_Zaa",
"3fVWpGPMLPY",
"RV0n2SvFb4n",
"RV0n2SvFb4n",
"z_B--AlSmpx",
"nips_2022_KxVSnZVuZZ",
"nips_2022_KxVSnZVuZZ",
"nips_2022_KxVSnZVuZZ",
"nips_2022_KxVSnZVuZZ"
] |
nips_2022_Jupoos_K4xt | Equivariant Networks for Zero-Shot Coordination | Successful coordination in Dec-POMDPs requires agents to adopt robust strategies and interpretable styles of play for their partner. A common failure mode is symmetry breaking, when agents arbitrarily converge on one out of many equivalent but mutually incompatible policies. Commonly these examples include partial observability, e.g. waving your right hand vs. left hand to convey a covert message. In this paper, we present a novel equivariant network architecture for use in Dec-POMDPs that prevents the agent from learning policies which break symmetries, doing so more effectively than prior methods. Our method also acts as a "coordination-improvement operator" for generic, pre-trained policies, and thus may be applied at test-time in conjunction with any self-play algorithm. We provide theoretical guarantees of our work and test on the AI benchmark task of Hanabi, where we demonstrate our methods outperforming other symmetry-aware baselines in zero-shot coordination, as well as able to improve the coordination ability of a variety of pre-trained policies. In particular, we show our method can be used to improve on the state of the art for zero-shot coordination on the Hanabi benchmark. | Accept | This paper proposes a test-time algorithmic modification to address a multi-agent coordination problem where agents choose incompatible strategies due to symmetries in the environment, showing that it is applicable to ZSC tasks like Hanabi. The proposed symmetrizer is applied to LSTM recurrent policies.
Reviews were mixed for this paper, and I really valued the in-depth discussion amongst reviewers and between reviewers and authors. I am recommending acceptance primarily based on the interesting discussion and new experimental results surfaced between reviewer Y7M6 and the authors:
1. The feedback from reviewer Y7M6 and authors increased my confidence in the statistical correctness of scores wrt baseline, and it is clear that the authors ensured that the baselines were computed fairly.
2. The rebuttal-revised version of the paper shows that the symmetrizer improves upon any self-play algorithm, such as OBL level 4 and 5. Reviewer Y7M6 points out "I think it would be better to claim that this work can improve state-of-the-art competitive models to become better instead of claiming this work achieves the new state-of-the-art ZSC performance in Hanabi. ". I believe the authors have shown that with their experiments on OBL L5.
The primary critique is that many of the scores were changed late in the review process, which does call into question whether the empirical rigor should be re-examined via another round of review. The authors have gone to great lengths to improve the rigour in the rebuttal. Beyond the specifics of empirical evaluation, I think the idea presented in the paper is interesting and worth sharing with the community: this paper presents an "explicit symmetrizer", whereas to my knowledge, most of MARL focuses on "implicit symmetrizers" like OBL, well-tuned MAPPO, data augmentation, neural fictitious play-like schemes. Furthermore, the technique appears to be quite complementary and can be combined with existing MARL algorithms.
Some asks:
- this paper uses a combination of training and inference time symmetrizers, so please make it clear in the final version of the paper how each of these contribute to the stated performance gains.
- it would be helpful to mention some of the statistical comments around "In our experience, a pool of 12 policies achieving 2.52 is particularly unlucky and is the result of several seeds that happen to completely misalign with all other policies." in an appendix, even if it's sort of anecdotal.
| train | [
"edgRxKdnWiQ",
"dfzwnXwC_eY",
"8H1dFmMA4dl",
"7tiQtRAYOCq",
"ClXUejf9Z3t",
"1A_HzTdnQryg",
"i9D_rdgrazr",
"etU0VRGpuQz",
"6QUHGwtjUt",
"Ui0iFgAOFWC",
"bH_s3PLaQ7r",
"Ohn3gFpFFLA"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comment and consideration.\n\nThe paragraph in Section 3 promising an empirical test was about testing whether the averaging scheme helped in improving ZSC (which we did find, as EQC improves ZSC), not about comparing it to using $e \\in G$. We agree that a formal ablation comparing the two approaches would be interesting and valuable; in our own testing, we did find the averaging scheme tended to work better than the choice of $e \\in G$. In light of the discussion period ending in a few hours, we will add this ablation study by the camera-ready deadline.\n\nThank you again for your helpful criticisms.",
" Thank you for your response.\n\nI appreciate all the changes that you have made, and I find the paper has improved. \n\nNevertheless, reading other reviews, I am growing particularly sensitive to Reviewer VbG9's point about the extension of equivariance to LSTMs:\n\nIn your reply to their review, you mention the addition of details in Section 3, which I find commendable, but then the said paragraph points to Section 4 for an empirical test of the choice that was made between the two regularisation approaches: \n\n- (i) averaging scheme, vs \n- (ii) using the identity element $e\\in G$ alone, \n\nI am realising that I initially understood that sentence wrongly; it does not mean that you would propose an ablation study in Section 4 to compare the two regularisation approaches, but rather that you validate your choice of the averaging scheme (i) by showing that performance in that context is increased compared to baselines...\n\nYet, I would very much appreciate the paper containing said ablation study between the two (or more) regularisation approaches for the hidden and cell states of the LSTMs in the architecture, as I think it would be a fairly valuable contribution to make, and it is important to highlight it as a point where some questions remain open.\n\nStill, I will raise my rating of the paper, relative to my initial review, from 7 to 8, as I find the theoretical guarantees to be impactful and, following revisions, also sufficiently numerically evidenced.\n",
" We appreciate the comments and thank you again for your helpful criticisms.\n\n1. We maintain that the main claims (and the writing) have not materially changed (and see 2. below), which are that 1) equivariant networks exactly solve symmetry breaking; 2) EQC can be used as a policy-improvement operator. The background papers used for comparison have largely not changed (with the exception of additionally now considering the OBL policy population to further demonstrate the generality of EQC), we just now use the original pre-trained models from the relevant papers rather than training our own seeds of them.\n\n2. We rephrase the claim as you recommend. We note that we show in the paper that the difference between OBL + EQC and OBL is statistically significant using permutation testing.\n\n3. We apologize for the confusion. In Section 4.1, we show that despite using far fewer symmetries, we can still do better in ZSC than OP $\\textit{due to symmetrization}$. We now additionally clarify that the G-OP policies are symmetrized at inference time in the caption of Table 1, but note this was stated in Sections 3 and 4.1. The comparison between OP and the symmetrized policies is fair because both policies are given access to symmetries (and OP given access even to far more symmetries), and we compare how effectively each policy type leverages this symmetry access: specifically, we compare if additionally constraining wrt symmetries at the model-level (equivariance) leads to improvement in coordination beyond constraining wrt the same symmetries purely at the data-level (data augmentation). Such comparisons between equivariance and data augmentation in given domains are common in literature and the subject of active research (eg [1]); in essence, section 4.1 quantifies the improvement that is gained from constraining for symmetry at the model-level versus the data-level for the multi-agent coordination domain, where we found that equivariant policies are far more “symmetry efficient”.\n\nPlease let us know if we need to address anything else.\n\n[1] Gerken, Jan, et al. \"Equivariance versus Augmentation for Spherical Images.\" International Conference on Machine Learning. PMLR, 2022.",
" Thanks for your reply. Overall I think the paper is improving. Response below.\n\n1. While the overall thesis of the paper may still be similar to the initially submitted version, the claims of the paper and the ability to compare to other significant background papers that were used for comparison is fundamental, and these features are new to the revision. Even without considering any other edits, I believe this warrants the time to do another full review.\n\n2. Thank you for using OBL Level 5. I think it would be better to claim that this work can improve state-of-the-art competitive models to become better instead of claiming this work achieves the new state-of-the-art ZSC performance in Hanabi. The latter claim leans on this work's highest score which, according to this work, is only 0.12 above the baseline score (achieved by symmetrizing that same baseline) and the error bounds almost overlap. On top of that, this claim also relies on [A] having made a mistake.\n\n3. This reply confused me on G-OP training:\n\n - In the initial rebuttal, this was said in response to asking the differences between using G-OP training and using OP: \"We train the G-OP policies on small subgroups of all symmetries, ie small subsets of the symmetries OP trains on (the cyclic group of order 5 and the dihedral group of order 10). We show in Sec 4.1 that despite training on far fewer symmetries we can still do better in ZSC than OP.\" \n\n - However, in response to the follow-up question on why training on small sugroups performs better in ZSC than OP, this was said \"The G-OP trained agents are symmetrized at inference time with our proposed mapping to be made equivariant, as stated in Sections 3 and 4.1.\" \n\n - I'm now confused what the claim is here. Is it that training on fewer symmetries can do better in ZSC than OP or is it that symmetrizing at inference time is required to do better than OP? If it is the former, then what is the reasoning behind on why training on subgroups does better than OP as in Table 1? If it's the latter, then Table 1 should mention there is a symmetrizer used, and that does not seem like a fair comparison between these scores (if $C_{5}$-Equivariant and $D_{10}$-Equivariant get to use a symmetrizer at inference and OP does not). Also, if it's the case that training on fewer subgroups of symmetries by itself does not help over OP, then is the only technique presented in this work that gets an empirical benefit the inference-time symmetrizer?\n",
" Thanks for your comments.\n\n1. While we have updated the paper and scores to reflect the reviewers' comments and requests, the changes are not fundamental and the thesis of the paper is very much the same: Section 3 has merely been split into subsections for ease of reading, without any profound revision of content, and while the scores in Section 4 are updated to reflect comparison with pre-trained models in the literature, the body of writing and conclusions drawn from the scores are the same.\n\n2. We now report the scores with OBL Level 5 instead of OBL Level 4, and indeed still find EQC improves on OBL Level 5. We note that OBL can be iterated arbitrarily, and that the OBL paper itself only lists scores up to Level 4. We also note that the scores we found in cross-play are markedly different than what [1] reports: we found that OBL Level 5 attains cross-play scores of 23.77 ± 0.06 (slightly better than OBL Level 4), while [1] reports 24.2 ± 0.01. It is possible [1] confused cross-play scores with self-play scores.\n\n3. The G-OP trained agents are symmetrized at inference time with our proposed mapping to be made equivariant, as stated in Sections 3 and 4.1. Thus despite training on less symmetries, symmetrizing wrt these few symmetries still outperforms a policy trained on all Hanabi symmetries. We also see in Section 4.2 that symmetrizing OP policies wrt a small subgroup of symmetries significantly improves the OP policies.\n\nPlease let us know of any other queries.\n\n[1] Keane Lucas and Ross E. Allen. 2022. Any-Play: An Intrinsic Augmentation for Zero-Shot Coordination. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS '22). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 853–861.",
" Thank you for your response, as it did help me understand parts of your paper more. This paper has been significantly improved, especially by using baseline models that allow better comparison to related work. Some comments and a question:\n- It looks like many sections in the paper have been edited, there is a new claim of SOTA score, and all scores and percentages in the results section previously reported are significantly different than what was initially submitted, which may warrant a re-submission of this paper. \n- In my understanding, this revision includes a new claim of a new state-of-the-art score (of 23.93) on ZSC for the Hanabi benchmark environment. However, this work achieved this by building upon OBL Level 4 models, whereas OBL Level 5 models (also freely available from the same source) achieve a ZSC cross-play score of 24.2 as reported in [A]. It's possible that this work could build upon the Level 5 model to get a higher score, but the new state-of-the-art claim the current revision makes seems inaccurate.\n- G-OP training: In my understanding, one of the claims of this paper is that EQC outperforms OP (another symmetry-aware baseline). However, the only difference in training between the baseline OP training and this work's G-OP training is that this work limits training to subgroups of symmetries instead of all of them like Other-Play. What is the reasoning on why this would do empirically better or is superior to training with Other-Play?\n\n[A] Keane Lucas and Ross E. Allen. 2022. Any-Play: An Intrinsic Augmentation for Zero-Shot Coordination. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS '22). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 853–861.",
" We thank the reviewer for their review.\n\nQ0. Statistical tests\n\nThank you for your recommendation to include statistical tests to support the results.\nWe conducted Monte Carlo permutation tests to evaluate the improvement incurred by equivariance and have found that the improvement is indeed significant. We have included these findings in the paper. We opted for permutation tests rather than the two sample K-S test, as scores in Hanabi are discrete and not continuous. Additionally, we have redone our experiments to be more directly comparable with baselines in the literature. We have increased our computing resources to match those of other works and consequently achieved more competitive scores in Section 4. We additionally used pre-trained models supplied by other papers as baselines. In particular, we achieved a new state of the art on the Hanabi benchmark with OBL + EQC.\n\nQ2: The symmetrizer does not require all the symmetries of \\Phi to be known. The group G used in the symmetrizer must be fully known, but one can choose G to be a subgroup of \\Phi. We do not intend to claim that choosing a subgroup G of \\Phi is theoretically sufficient to prevent all symmetry breaking, but it is sufficient to prevent symmetry breaking wrt symmetries in G. We have rephrased this passage.\n\nWe do not consider S_5-equivariant agents in this work, as S_5 is already a group of fairly large order. The OP agents we compare against were trained with permutations from S_5, while the equivariant agents only with permutations from C_5 and D_10 respectively. Despite being trained on far fewer permutations, the equivariant agents perform better. We’ve made line 257 more clear as to what it’s referring to.\n\nQ1: Any group can be viewed as a group of permutations, or as a group of permutation matrices. This is known as the permutation representation of a group. Also, the terminology “permutation matrix” alludes to the computational implementation of group actions as matrix products, and saying “matrix” instead of “function” simply emphasises that the function is a linear map of the input. We now state this in the paper.\n\nLimitations\n\nWe have added to the Conclusion that our work focuses on the fundamental problem of correcting symmetry breaking, which is an important component but not the only component of coordination.\n\nRemaining comments\n\nWe have changed “class of symmetries” to “set of symmetries”.\n\nFollowing your suggestion, we expanded the proof of Proposition 3 in Section 3 to ease understanding.\n\nWe agree that “first approach” is somewhat ambiguous and have renamed “first approach” to “naive approach”.\n\nWe kindly thank the reviewer for their comments. The inclusion of the statistical tests as well as the refinement of the language has made the work a lot more convincing.\n",
" Thanks for your review.\n\nBaseline models\n\nThank you for pointing out a concern with not using pre-trained baseline models. We trained our own baseline models to ensure the models were directly comparable to the EQC models. In [1], the authors used more compute than in our experiments: they use 2 GPUs rather than 1 for simulation, which, as the authors say, “has a profound effect on reducing the wall clock time required to achieve competitive performance”. In our work, we used a single GPU for simulation. Thus, the EQC models trained with our resources were not directly comparable with those of [1]. In addition, it is worth noting there is a lot of variance when evaluating cross-play scores of different seeds, and 9.43 is an outcome within reason for SAD. In our experience, a pool of 12 policies achieving 2.52 is particularly unlucky and is the result of several seeds that happen to completely misalign with all other policies.\n\nIn light of your comments, we now use an additional GPU for simulation for our equiv agents. This ensures our EQC models can be properly compared to [1]. In Sec 4, we revised our experiments to use the pretrained baselines instead of our own trained versions. We find our equiv agents outperform the OP agents from [1] (Sec 4.1), and both the SAD and OP baselines from [1] benefit from symmetrization at inference time (Sec 4.2). We also add statistical tests to strengthen our results.\n\nComparison with OBL\n\nWe agree OBL[2] doesnt require explicit formulation of symmetries. But, OBL requires simulator/env access to run rollouts of simulated play, which is arguably an even stronger assumption. EQC can be used under the assumption of only partial symmetry access, where an agent can become equiv wrt a small subgroup of all symmetries (see Sec 4.1 and 4.2 with the cyclic and dihedral groups).\nAlso, iterating multiple levels of OBL is very expensive.\nUltimately, OBL and EQC address related but different issues and are thus complementary to each other: \nOBL avoids the emergence of arbitrary communication protocols independently of the presence of environmental symmetries. However, it was specifically developed for turn-based partially observable settings. \nIn contrast our work applies to any setting where symmetries are present, even if it is fully observable and players move at the same time. \nPractically speaking, EQC is an improved version of OP and can eg be used to solve the lever game in Sec 2.5, or utilise cheap-talk channels in iterated interactions; OBL fails in both of these settings.\n\nFinally, EQC is also a policy-improvement operator (Sec 4.2), and in light of your comments we have added results to the paper demonstrating that applying EQC to OBL improves OBL, thus achieving new SOTA results in ZSC on the Hanabi benchmark. We’ve clarified the comparison with OBL in the paper.\n\nLSTM equivariance\n\nIn our architecture, equivariance is ensured by averaging over all permutations of the observations and agent actions. We don’t conduct group actions on the LSTM hidden and cell states, as these need not be altered to ensure equiv in the observation-action spaces. The role of our modification to the LSTM layer (see Sec 3) is to keep performance in ZSC high while the network is transformed to an equiv one. All propositions in the paper account for LSTM nets. The proofs can be trivially adapted as follows: adding the hidden and cell states, h_t and c_t, as inputs along with the observations, x, and applying the group actions only to x. 
We opted not to add h_t and c_t in these proofs for ease of notation. We’ve made all this more explicit in Sec 3\n\nLever game\n\nTo find a payoff maximising policy during selfplay training, SP converges to a policy that picks one of the 1.0 levers, since during joint training agents can consistently break the symmetry between levers. At test time, this fails with novel partners who may have converged on different 1.0 levers. If we constrain ourselves to equiv policies while maximising payoff, we will get the policy that picks the 0.9 lever, because an equiv policy that picks 1.0 levers must choose uniformly at random to preserve equivariance, resulting in an expected return of 0.11, compared with 0.9 for choosing the 0.9 lever. We have clarified this in Sec 2.5\n\nSymmetries during training\n\nWe train the G-OP policies on small subgroups of all symmetries, ie small subsets of the symmetries OP trains on (the cyclic group of order 5 and the dihedral group of order 10). We show in Sec 4.1 that despite training on far fewer symmetries we can still do better in ZSC than OP. Also, in Sec 4.2 we show we can symmetrize any policy, including OP, at inference time with EQC and still improve ZSC, which is significant\n\nWe thank the reviewer for their help in making the paper significantly stronger and more clear. Please let us know of any further comments/questions that would need to be addressed before you feel comfortable raising the score\n\n[1] OP\n[2] OBL",
" We thank the reviewer for their comments.\n\nOur main contributions are 1) showing that the theory of equivariant networks exactly solves symmetry breaking, which is a fundamental problem in multi-agent coordination, and 2) demonstrating that EQC, our method, can be used as a policy-improvement operator for any generic policy, which we validate on a highly challenging multi-agent benchmark (Hanabi). We agree that the architectural choices we propose for adapting LSTMs to an equivariant structure capable of multi-agent coordination are of note as well. We have further clarified the contributions.\n\nRegarding your listed weakness that Section 3 is hard to follow, we thank you for your suggestion and have split the sections’ topics into subsections for ease of reading (3.1 Architecture, 3.2 A Theory for Symmetry and Coordination, 3.3 Algorithmic Approach), as you recommended.\n\nRegarding your questions:\n\n(1) + (2) LSTM Architecture modifications\n\nThe LSTM’s hidden and cell states need not be altered in order to ensure equivariance: this is because in our architecture we average over all symmetry permutations of the observations and agent actions, which ensures equivariance. If at every timestep we input random tensors for the hidden and cell states, our proposed architecture dictates that the network would still be equivariant in the observation-action spaces. The role of our architectural modification to the LSTM layer is to keep performance in zero-shot coordination high while the network is transformed to an equivariant one. All propositions in the paper indeed account for LSTM nets, where the proofs can be trivially adapted as follows: adding the hidden and cell states, h_t and c_t, as inputs along with the observations, x, and applying the group actions only to x. We opted not to add h_t and c_t in these proofs for ease of notation. We’ve made all this more clear in the updated version of the paper.\n\t\n(3) Regarding asymptotic performance\n\nIn [1], the authors find that equivariance outperforms data augmentation in supervised learning even after increasing model size for data augmentation. \n\nIn RL, increasing the size of the model can hurt training: for example, [2] turns a set of supervised learning image tasks into RL tasks, then compares the performance of a simple CNN with ResNet-18 (successful architecture in supervised learning tasks). In their experiments, the simple CNN outperforms the ResNet-18. This is evidence that training deep models with RL is difficult.\n\nEquivariance outperforms data augmentation in supervised learning, even after increasing model size for data augmentation; and training deeper/bigger models with RL is difficult (as stated above). We would therefore not expect data augmentation (OP) to outperform equivariance (EQC) in our setting as we increase the model size.\n\n(4) The only modification to the LSTM is the one mentioned in line 147. We found good performance in coordination with only this modification.\n\n(5) We thank the reviewer for pointing out opportunities for improving the clarity of our paper. The batchsize we use is 128 and each sample in the batch is permuted with a randomly selected group element. We have clarified this in the paper.\n\n(6) We assume this means the hyperparameters used for training, which are mostly set via empirical finetuning, as well as by using the hyperparameters used in prior works such as [3, 4] as references. You may refer to the appendix for the complete list of hyperparameter choices. 
Please clarify the meaning of “network parameter”.\n\nRegarding prior knowledge of symmetries: our work does not require access to all symmetries. An agent can be equivariant with regards to a small subgroup of the group of all symmetries while achieving improvement in coordination (see Sections 4.1 and 4.2). Assumptions of some form are made in nearly all works on zero-shot coordination. For example [6] makes the strong assumption of access to the simulator (environment). Deriving symmetries is an interesting direction for future work. Works such as [5] exist as possible means to derive symmetries and relax the a priori access assumption. \n\n\nFinally, we would like to inform the reviewer that we have incorporated new results in our paper that demonstrate EQC improves on the state of the art for zero-shot coordination on the Hanabi benchmark.\n\nWe thank the reviewer for helping significantly improve the paper.\n\n[1] Equivariance vs Augmentation for Spherical Images, Gerken et al 2022\n[2] Natural environment benchmarks for RL, Zhang et al 2018\n[3] Other-Play, Hu et al 2020\n[4] Simplified action decoder, Hu et al 2020\n[5] Quasi-equivalence discovery for ZS-emergent communication, Bullard et al 2021\n[6] Off-belief learning, Hu et al 2021",
" The authors propose to use equivariant networks to handle the symmetry-breaking problem in multi-agent coordination settings. To generate an equivariant history-dependent policy, the authors present the equivariant LSTM structure and validate its performance in Hanabi games. Strengths:\n\n - I found the work interesting and believe the main technical contributions are (1) extending an equivariant MLP to an equivariant LSTM structure, (2) leveraging the permutation group while prior work mainly leverages the SO(2), (3) extending the equivariant net to game applications.\n\nWeaknesses:\n\nOriginality:\n\n - Although the extension to the equivariant LSTM structure is interesting, I found the paper does not have an analysis tailored to LSTM (see the first two questions in the clarification part). Thus the contribution of this paper is not significant to me, but I look forward to the answers to the clarification questions below.\n\nClarification:\n\n - I found the Section 3 Method hard to follow. It could be improved by listing several subsections instead and highlighting the main contribution as extending the equivariant neural net to the LSTM setting.\n About contribution and soundness: \n- In section 3 line 147, the authors proposed a modification for LSTM layers, which is interesting. However, how is the modification here related to propositions 3 and 4 in the following line 155 and line 160? I assume that propositions 3 and 4 are not tailored for LSTM nets. \n - A follow-up question is how the modifications in line 147 affect the equivariance of the proposed LSTM network?\n - What is the effect of the size of the network on the asymptotic performance? I assume that with a larger network and adequate data, the OP may have better performance. The equivariant net for sure helps improve the data efficiency but not necessarily the asymptotic performance. \n\nAbout writing and clarification:\n - The network modification is mysterious. Does the modification to LSTM only include the one mentioned in line 147?\n - In option 1 in line 177, the authors mention a mini-batch, is the batch size equal to 1? What is the effect of the group element batch size? \n - How is the network parameter selected in this paper?\n\nMinor issues:\n - Figure 1 is not referred to in the main content.\n\n The authors addressed one limitation which is the prior knowledge of the group symmetry, which I think is crucial. ",
" This work approaches the zero-shot-coordination (ZSC) problem by explicitly restricting the learnable (or act-able) policies to only those that do not rely on known asymmetric convention in the environment. They do this by using a 'symmetrizer' that can be used to make any regular trained policy equivariant post-training or be used during training to train a network to act more equivariant (although a symmetrizer can still be used with this network on inference as well). Specifically, this work seeks to outperform prior symmetry-aware ZSC algorithms on the benchmark environment Hanabi. Strengths\n- This paper extends equivariance to recurrent neural networks\n- This work is well written, especially considering the complexity of the material discussed\n- This work does seem like a further mathemtical advance on eliminating symmetries similar to how Other-Play approaches ZSC\n\nWeaknesses\n- The original models for the basline are freely available, but this work chooses to train their own models. This adds another difference that makes it harder to compare to the baseline and calls into question the accuracy of the results.\n- This paper seems to present an incrementally better solution towards dealing with explicitly defined symmetries in an environment, but more recent work (such as OBL [21]) does not require this explicit definition.\n- I am uncertain of any empirical advantages it provides with the current state-of-the-art in ZSC. For example, Off-Belief-Learning [21], like this work, is also shown to eventually converge to the same policy regardless of the initialization, but achieves better results (20.85-23.76 cross-play) than this work without needing to explicitly define symmetries [21]\n- This work chooses the same set of symmetries as Other-Play, and seems to train the neural networks the same or very similar as Other-Play to be invariant to these symmetries.\n- In my understanding, this work trains its own versions of the baseline models, which seem to get very different scores than the original baseline papers report. However, these original papers publish already trained models that could be used (more details in questions) - This paper reports results for the baseline in Section 4.1 that are significantly different than the reported results in the baseline papers [19] and [20]. For example, the cross-play scores of Other-Play [20] in this work are 11.12, in the original OP paper they are 15.32 or 22.07 (with or without the auxilliary task, respectively) [20] and in this work SAD cross-play is reported as 9.43 in Table 2 but is reported as 2.52 in the original paper[20] (Table 1). What is the reason for these differences?\n\n- How do you construct group actions suitable for cell state and hidden state i the LSTM as discussed in Proposition 2 in Section 3? In my understanding, the different dimensions in these states do not normally correspond to known observations or actions.\n\n- If possible, I would appreciate more of a walkthrough in why the $\\pi^{\\text{equiv}}$ policy in section 2.5 will always choose the 0.9 lever.\n\n- On page 5, what are the differeneces between the strategy of using \"G-OP\" and using Other-Play? Is it that Other-Play trains with more permutations? Yes\n\nPost-rebuttal change:\nIncreased score from 3 to 4 to reflect improvement of paper in rebuttal. However, as discussed elsewhere, I still have concerns on the magnitude of the changes in the revision and remaining uncertainty on experiment methodology and comparisons.",
" In cooperative MARL, following training with a given set of partners, it is important that the train agents be able to zero-shot coordinate with novel partners. \nA common failure mode, entitled symmetry breaking, happens when the trained agents learned on out of many equivalent (thanks to the game's symmetries) but mutually incompatible policies, yielding high performance with training partners but very low performance with novel partners.\n\nTo solve this failure mode, this paper proposes to build equivariant network architectures (assuming agents are modelled with deep neural networks, it means that \"symmetric changes to their observation cause a corresponding change to their output\"), in order for agents in Dec-POMDPs to learn policies that do not break symmetries.\n\nPrevious papers, like Other-Play which proposes an optimization algorithm that trains \"agents to maximize expected return when randomly matched with symmetry-equivalent policies of their training time partners\" via data augmentation, do not have equivariance guarantees.\n\nInstead, this paper proposes Equivariant Coordinator (EQC) which has the following features:\n- it comes with theoretical equivariance guarantees - the paper provides mathematical and numerical evidences; \n- it encodes the constraint of equivariance over the game's symmetries \"not in the data, but in the network itself\".\n- it can be applied in two ways, either during training time (via a tractable approximation) or can be used as a policy-improvement operator at test-time (without approximation).\n\nComparing EQC with state-of-the-art and baseline approaches in zero-shot coordination (like OP, SAD, VDN) shows better performance, independantly of the way it is applied. # Originality :\nTo my knowledge, the paper is the first to propose to build equivariant network architectures to encode environment's symmetries, in the context of zero-shot coordination.\n\n# Quality : \nThe quality of the paper is high, the maths seems very sound to me, and the experiments are compelling and well-rounded, as far as I can see.\n\nReproducibility seems strong to me.\n\nNevertheless, there is one quality/soundness issue around the strength of the numerical evidence:\n--> line 287 : Given the size of the standard deviations, I would appreciate that the authors perform a more thorough investigation using statistical tests to have a quantitative measure of the extent with which the hypothesis is actually supported by the results presented. For instance, using two-sample Kolmogorov-Smirnov tests between each condition's distribution, with the alternative argument set as 'less' or 'greater' [1].\n\n[1] : SciPy's two-sample Kolmogorove-Smirnov test : https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html\n\n\n# Clarity :\nI have found the paper to be mainly well-written and easy to follow.\nThat being said, I would invite the authors to address the following, mainly to increase accessibility and completeness of the paper:\n\n--> line 102 & 140 : Clarity : I feel I need more details on the nature of $\\Psi$, why defining it as a **class** of symmetries rather than a set of symmetries, for instance? It may be obvious but I am missing the point, I am sorry.\n\nCould this choice be influenced by the fact that $L_g$ and $K_g$ /$K_g^{-1}$are denoted as\"permutation matrices\" rather than functions? (line 140)\nTheir usage in Eq.2 would invite for a functional definition, if I understand correctly. 
as they operate on $x,\\psi(x)\\in (\\mathbb{R}^d)^2$ ...\n\n--> line 157: Clarity : I would invite the authors to provide an easier reading experience by detailing that proof:\n\n$$\n\\begin{align} \n\\pi ^ 1(a | \\tau ) &= \\pi^1( \\phi\\circ\\phi^{-1}(a) | \\tau ) & \\text{from } id=\\phi\\circ\\phi^{-1} \\\\\n&= \\pi^1(K_{\\phi}\\circ\\phi^{-1}(a) |\\tau) & \\text{by identification}\\\\\n&= \\pi^1( \\phi^{-1}(a) | L_{\\phi}(\\tau) ) & \\text{from Eq.2}\\\\\n&= \\pi^1(\\phi^{-1}(a) | \\phi(\\tau) ) & \\text{by identification}\\\\\n&= \\pi^2(a|\\tau) & \\text{from }\\pi^1=\\phi(\\pi^2)\n\\end{align}\n$$\n\n--> line 200 : Clarity : the denomination \"first approach\" is ambiguisly used, maybe? Isn't G-OP the first approach, while the second approach is to use the *symmetrizer* $S$ on SP-trained $\\psi$ and deploy $S(\\psi)$?\n\n--> line 220-221 : Clarity/Soundness : I am not sure I can agree with the last sentence: I do not think it is a fair description to say that the present method does not require G to be fully known, as the symmetrizer relies on it to be fully known, no?\n\nI agree that, in practice, the paper shows numerical results that a subset of all the environment symmetries ***may*** be sufficient, but the theoretical evidence requires full knowledge of them, if I understand correctly.\nTherefore I would invite the authors to rephrase here.\n\n--> Table 1 : missing result in Table 1, for S5-equivariant agents, maybe? Line 257 seems to refer to it but those results are not in the table.\n\n\n# Significance :\nI think the paper is rightfully placed in the literature and very significant to the field of machine learning as zero-shot/human coordination is a very important problem for the field going forward. \nWhile the theoretical evidence are, in my viewpoint, very significant, I must confess that I am left disappointed with the numerical results not providing, at first glance, a very significant improvement to the state-of-the-art.\nThus, I am hoping that the statistical analysis that I am inviting the authors to perform will provide a compelling statistical significance argument to the numerical results, hopefully.\nIf not, then at least readers will know that there is still legitimately some work to do here.\n\n Q0 : Maybe the most important point for me, could you provide statistical analysis of the significance of the numerical results, possibly using two-sample KS tests as detailed above, please?\n\nAddressing this would allow me to raise up my appreciation of the paper.\n\nQ2 : Could you rephrase around line 220 in order to clearly differentiates the theoretical needs and guarantees from the practical ones, please?\n\nQ1 : Could you clarify the denotation of \"permutation matrices\" as opposed to functions, maybe, as detailed above, please? I believe the authors have mainly adequately addressed the limitations and potential negative societal impact.\nNevertheless, I would welcome the paper from investigating and/or discussing further the possible limitations that are the root cause of the numerical results falling short of solving the benchmark, maybe?\n\n\n## Post-Rebuttal Changes:\n\nI am raising my rating of the paper from 7 to 8.\nIndeed, following the revisions made by the authors, I find the theoretical guarantees to be now both impactful and also sufficiently numerically evidenced.\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"dfzwnXwC_eY",
"i9D_rdgrazr",
"7tiQtRAYOCq",
"ClXUejf9Z3t",
"1A_HzTdnQryg",
"etU0VRGpuQz",
"Ohn3gFpFFLA",
"bH_s3PLaQ7r",
"Ui0iFgAOFWC",
"nips_2022_Jupoos_K4xt",
"nips_2022_Jupoos_K4xt",
"nips_2022_Jupoos_K4xt"
] |
nips_2022_ylila4AYSpV | Simple Mechanisms for Welfare Maximization in Rich Advertising Auctions | Internet ad auctions have evolved from a few lines of text to richer informational layouts that include images, sitelinks, videos, etc. Ads in these new formats occupy varying amounts of space, and an advertiser can provide multiple formats, only one of which can be shown.
The seller is now faced with a multi-parameter mechanism design problem.
Computing an efficient allocation is computationally intractable, and therefore the standard Vickrey-Clarke-Groves (VCG) auction, while truthful and welfare-optimal, is impractical.
In this paper, we tackle a fundamental problem in the design of modern ad auctions. We adopt a ``Myersonian'' approach and study allocation rules that are monotone both in the bid and set of rich ads. We show that such rules can be paired with a payment function to give a truthful auction. Our main technical challenge is designing a monotone rule that yields a good approximation to the optimal welfare. Monotonicity doesn't hold for standard algorithms, e.g. the incremental bang-per-buck order, that give good approximations to ``knapsack-like'' problems such as ours. In fact, we show that no deterministic monotone rule can approximate the optimal welfare within a factor better than $2$ (while there is a non-monotone FPTAS). Our main result is a new, simple, greedy and monotone allocation rule that guarantees a $3$ approximation. In ad auctions in practice, monotone allocation rules are often paired with the so-called \emph{Generalized Second Price (GSP)} payment rule, which charges the minimum threshold price below which the allocation changes. We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded. Under standard no-overbidding assumptions, we prove bounds on the pure and Bayes-Nash PoA. Finally, we experimentally test our algorithms on real-world data. | Accept | Reviewers agreed that the rich ad auction problem is significant and are excited about the theoretical bounds, both the positive result (achieved by a simple mechanism) and the negative result. Overall, this is a solid theoretical paper on an important and classical problem in industry. | train | [
"R9BnwKNK-TR",
"t889ymFq-Q0",
"70rhPMl1A4ZO",
"-jeO7TpSOzM",
"hRg6V2Kpde6N",
"l3hPfNeQb8",
"fsWYuTDoeFD",
"hdpGfIO31zz",
"HxvBaw6MWf5",
"8qQcmtpSwt",
"xSptrXWA4Ld"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " The reviewer rnSK is indeed correct. The time reported in Table 1 is the mean time in seconds. We will correct it to milliseconds in the final version.",
" I think Figure 4 in the appendix is more consistent with what I would have expected. It has the median time for VCG between 10 and 20 msec. But Table 1 lists the average (presumably mean) as 0.03 msec. This is 3 orders of magnitude different! So unless there is something quite different between the setups something feels off. Or maybe Table 1 is actually reporting seconds?",
" Thank you for your review and constructive feedback. Below we answer your question.\n\n- “I am not familiar with this area. I am confused with the claim in the abstract that \"We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded.\". This seems create a contradiction. Since they provide a monotone allocation rule in this paper and also shows in Lemma 1 that a monotone allocation with the corresponding payment rule will lead to truthful auction. Will this lead to a truthful auction?”\n\nWe prove that a monotone allocation rule can be paired with an appropriate payment rule to obtain a truthful auction. However, not every payment rule leads to truthfulness. And, in our case, GSP is not the appropriate payment rule; the appropriate payment rule for truthfulness is stated in Lemma 1 (page 5, lines 229-230).\n\nFor a similar example, consider the single item (i.e. single-parameter) setting and the monotone allocation rule “give the item to the agent with the largest bid”. Paired with the payment rule “if you get an item, pay the second highest bid” the overall auction is truthful. However, when paired with the payment rule “if you get an item, pay your bid”, the overall auction is not truthful.\n",
" Thank you for your review and constructive feedback.",
" Thank you for your review and constructive feedback. We’d be happy to include a short discussion on reserve prices/revenue maximization in the main body of the final version of the paper. Below we answer your question.\n\n- “Part of the reason these approaches are needed is the intractability of VCG, but in the experiments VCG seems faster than I would have expected (0.03 msec). Is the data used for the experiment nicer in some way than average? If not why is this performance of VCG inadequate?”\n\nThe data does contain some easy instances where the number of available advertisers is small and VCG runs very fast (See figure 4 in the appendix). While VCG is only 10 times slower on average, on the 50th percentile queries VCG can take upwards of 20 msec while it is less than 5 msec for our algorithm. Also note that the greedy allocation rule is very fast (See figure 6 in the appendix) and some of the latency comes from calculating the Myersonian payment. In short, Greedy with Myersonian or GSP payments is much faster than VCG. In practice, latency constraints are pretty tight and even queries in the tail need to be fast to provide a good experience to the end user.\n",
" Thank you for your review and constructive feedback. Below we answer your questions.\n\n- “Can you say a few more words about how to apply “Myersonion payment function” to the experiments?”\n\nLet us clarify. By “Myersonian payment function” we mean the payment rule which, paired with our monotone allocation, gives an overall truthful auction (see Lemma 1, page 5, lines 229-230; and also note that this payment rule is very similar but not identical to the standard Myerson payment for single-parameter problems). We explain how to do this computation in Appendix G (lines 1012-1015). This computation is not as simple as computing the generalized second price (GSP), but, in the experiments, our auction is still much faster compared to VCG.\n\n- \"The results are not surprising, and most techniques are standard, including the Myerson Lemma, knapsack algorithm, and the way to analyze the price of anarchy.\"\n\nNote that the underlying optimization problem is not knapsack but multi-choice knapsack, where the optimal fractional algorithm does not allocate in bang-per-buck order but instead allocates in incremental-bang-per-buck order. While the incremental-bang-per-buck algorithm does provide a 2-approximation, it is not monotone (which is necessary for our approach to give a truthful auction). We instead show that the bang-per-buck algorithm is monotone and that, mixing with the highest value ad, provides a good approximation to the optimal fractional allocation. Finally, the proof of this approximation guarantee (proof of Thm 3) is more involved than the 2-approximation for knapsack/multi-choice knapsack.\n\nFor the Price of Anarchy result, indeed the blueprint of the proof (smoothness) is standard. However, the difficult part, bounding the utility after deviation, must be done from scratch for every given setup/algorithm. And, because of the randomization in our algorithm, getting such bounds is further challenging here.\n",
" We would like to thank all reviewers for their constructive feedback. We will incorporate the valuable suggestions from all reviewers in the final version of this paper. Since there are no concerns/questions that are common across reviewers we answer each reviewer’s questions in separate, independent responses below.",
" In this paper, the authors consider the “Rich Advertising Auctions”, where advertisers can opt in or\nout of showing different extensions with the ads, which also makes the setting not single-dimensional. The objective is to design truthful mechanisms to maximize social welfare. \n\nAlthough the VCG mechanism is truthful and maximizes social welfare, it involves solving an NP-complete problem which is thus not efficient. \n\nThe first result set in this work is that the authors show no deterministic monotone rule can approximate the optimal welfare within a factor better than 2, and they design a simple greedy and monotone allocation rule that guarantees a 3 approximation. This algorithm combines a greedy procedure using the bang-per-buck order, and a randomizing between this procedure and the largest value ad. \n\nThe second result is to consider the Generalized Second Price (GSP) payment rule, which charges each advertiser the marginal threshold below which their allocation changes. The authors show that this mechanism is not truthful, but the resulting price of anarchy is bounded, under the standard assumption of no-overbidding. \n\nThe authors also provide an empirical evaluation of the designed mechanism on real-world data.\n Strengths:\n\nAs far as I check, the results are correct. The paper is well written, and I can follow most parts easily. By the way, I actually think that most of the footnotes include quite important information for readers to understand the problem, which is not necessary to put in footnotes.\n\nAnother issue is about the presentation. I think some proofs in the main body can be sacrificed in exchange for a section of PoA, including the definitions there, like GSP and PoA. I think they are quite important for the authors to understand and appreciate the results. \n\n\nWeaknesses:\n\nI am not an expert on this work, and thus sought help from a colleague who works on Bayesian mechanism design. The comments are as follows. The new model is novel, which can be interesting to be studied. The results are not surprising, and most techniques are standard, including the Myerson Lemma, knapsack algorithm, and the way to analyze the price of anarchy. The results are good to be published, but may not be NeurIPS. \n\nI kind of agree with the colleague, but with low confidence. Although I do not have enough knowledge of recent progress in this area, some key ideas used in this paper are indeed straightforward. For example, the randomization and the bang-per-buck order procedure are widely used in knapsack-related problems. The analysis of PoA is also not surprising if I did not miss the technical obstacles here; It bounds each advertiser’s utility gain from deviation but we need to carefully take knapsack constraint into consideration. \n\nOverall, I think this paper is not exciting but can be accepted if space is allowed.\n Can you say a few more words about how to apply “Myersonion payment function” to the experiments? None",
" The basic model of search advertising auctions is by now well understood, but industry has long faced a practical challenge when adapting this model to rich formats that can show multiple versions of each add of different sizes and with different combinations of decorations. The resulting combinatorial problem has been challenging from both computational and incentive perspectives. This paper proposes a simple greedy approach which is a variant of the optimal fractional algorithm. This forms the heart of a truthful mechanism which is shown to have both provable guarantees and strong empirical performance. Strengths:\nWhile the rich ad problem has received some prior attention in the literature, this is the first paper I have seen which presents what I would consider a largely “complete” solution. The main theoretical result (Theorem 3) is compelling and supported by a number of other results which add to the richness of the paper. I also appreciate that the analysis goes beyond simply achieving the theoretical bounds but discusses several heuristic improvements that do not affect the theory but improve performance in practice. The empirical results provide a convincing demonstration of the benefits of the approach by showing strong performance relative to the theoretically intractable and practically 10x slower VCG.\n\nWeaknesses:\n- Results in the main paper feel somewhat cramped. The paper does a reasonable job trying to fit things in within the page limits, but it does mean that quite a bit of the richness only really shows up in the appendix.\n- As is sadly often the case in this space the reproducibility of the work is limited due to the commercial sensitivity of advertising datasets. Nevertheless, some work has done a better job of at least defining models for generating synthetic data and running experiments on it as well to provide somewhat greater reproducibility.\n- The one missing piece that led me to describe the solution as only “largely” complete is the omission of reserve pricing, which is crucial in practice. This is acknowledged in Appendix G, and as mentioned the paper is already quite full, but I would have hoped for at least some discussion of how reserve prices can be reasonably applied since this doesn’t seem immediately obvious and I could imagine multiple ways to apply score-based reserves (raw score vs score-per-buck and per-advertiser vs per-variant).\n Part of the reason these approaches are needed is the intractability of VCG, but in the experiments VCG seems faster than I would have expected (0.03 msec). Is the data used for the experiment nicer in some way than average? If not why is this performance of VCG inadequate? See weaknesses",
" The authors introduce a rich ads auction problem which can be considered as a special case of the well-known MULTI-CHOICE KNAPSACK problem. If the designer allocates ads using the bang-per-buck idea directly, there is an example that shows the allocation is not monotone. They show that no deterministic monotone rule can approximate the optimal welfare within factor 2. Their main result is a greedy allocation mechanism that approximates at least one-third of the optimal welfare. There are some different versions of rich ad problems. The authors answer an open problem in [DSYZ10]. Both positive and negative results look strong to me. The mechanism is easy to implement in reality and the performance in the experiment is quite good. Overall, the paper is well-written. \n\nNo obvious weakness is found. I have no questions. There is no potential negative societal impact. ",
 " The paper studies the internet ad auctions problem in which the advertiser has a rich set of ads with different values and space requirements. They adopt the ``Myersonian'' approach and study allocation rules that are monotone both in the bid and set of rich ads. They show that the proposed new, simple, greedy and monotone allocation rule guarantees a three approximation compared to the optimal. # Strengths\n1. The studied problem is interesting.\n2. Theoretical results are provided.\n3. Experimental results are conducted.\n\n# Weakness\n1. I am not familiar with this area. I am confused by the claim in the abstract that \"We prove that, even though our monotone allocation rule paired with GSP is not truthful, its Price of Anarchy (PoA) is bounded.\". This seems to create a contradiction, since they provide a monotone allocation rule in this paper and also show in Lemma 1 that a monotone allocation with the corresponding payment rule will lead to a truthful auction. Will this lead to a truthful auction?\n\n2. The uploaded pdf version is blurry when zoomed in.\n\nSince I am not familiar with this area, I am willing to align my score with other reviewers if they are experts in this area. See weakness above. No"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4,
1
] | [
"t889ymFq-Q0",
"hRg6V2Kpde6N",
"xSptrXWA4Ld",
"8qQcmtpSwt",
"HxvBaw6MWf5",
"hdpGfIO31zz",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV",
"nips_2022_ylila4AYSpV"
] |
nips_2022_Yb3dRKY170h | One-shot Neural Backdoor Erasing via Adversarial Weight Masking | Recent studies show that despite achieving high accuracy on a number of real-world applications, deep neural networks (DNNs) can be backdoored: by injecting triggered data samples into the training dataset, the adversary can mislead the trained model into classifying any test data to the target class as long as the trigger pattern is presented. To nullify such backdoor threats, various methods have been proposed. Particularly, a line of research aims to purify the potentially compromised model. However, one major limitation of this line of work is the requirement to access sufficient original training data: the purifying performance is a lot worse when the available training data is limited. In this work, we propose Adversarial Weight Masking (AWM), a novel method capable of erasing the neural backdoors even in the one-shot setting. The key idea behind our method is to formulate this into a min-max optimization problem: first, adversarially recover the non-robust perturbation patterns and then (soft) mask the network weights that are sensitive to the recovered patterns. Comprehensive evaluations of several benchmark datasets suggest that AWM can largely improve the purifying effects over other state-of-the-art methods on various available training dataset sizes. | Accept | The paper presents a method for defending deep neural networks against backdoor attacks, i.e., attacks that inject “triggered” samples into the training set. The method can be seen as an improvement on Adversarial Neuron Pruning (ANP) that uses (i) soft weight masking (SWM), (ii) adversarial trigger recovery (ATR) and (iii) sparsity regularization (SR). The main focus of the paper is in the low-data regime, especially in the one-shot setting and when the network size is small.
The authors have clarified the novelty of the approach wrt to ANP and have provided additional experiments addressing some of the reviewers' concerns. In view of this, some of the reviewers raised their scores. However, there are still concerns regarding the novelty of the method and the difficulty of setting hyperparameters. The empirical results seem solid, however.
| val | [
"DEUtoVWNJ-W",
"IWtMfuQD1H",
"wVU3zTsZSwy",
"1YTTLZLjr8",
"cz9mF2Kk2nB",
"Yx-2LrJE01",
"HnHkHlzSSpk",
"7f93mML3WAp",
"9eYgULnge3A",
"RUGmjYqQcLA",
"fR5gcQLmpIX3",
"XxYEKfzDGim",
"HKDd_k3hzl",
"oSRjdWiZdcp",
"akJw83dFs2h"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " Thanks for your reply and raising your score. Yet we would like to further clarify some points in your last comment and hope to address your concerns on the contribution and hyperparameter search.\n\nIn terms of our contributions, we respectfully disagree with you. Our method is *NOT* “a heuristic variant of ANP”. The proposed method solves a specific problem in ANP: that it performs poorly when the available training data size is small. And from our ablation study in Table 3, you can easily observe that the ATR (adversarial trigger recovery) module is the main driving force for the improved performances, not the soft mask or the sparse regularization. Essentially, we estimate the worst-case perturbation $\\Delta$ to the input data for backdoor removal while ANP perturbs the neurons for finding the sensitive ones. In fact, it is still hard to understand why there exist such sensitive neurons that directly correspond to the backdoor (or, essentially, why ANP works, aside from the experimental results). Yet our trigger recovery strategy is actually very intuitive and easy to understand: we ask the neural network to adjust the weight masks to make sure that even though the worst-case trigger is applied, the prediction will not be modified, which shares some essence with the adversarial training method in the adversarial robustness literature. But different from adversarial training, which typically does not work when the finetuning dataset is small, we only adjust the weight masks instead of the model parameters themselves to make sure that our method works even under the one-shot setting. Therefore, we believe that our method opens up new directions for backdoor removal with very limited original data and sheds light on various other tasks that aim to perform fine-tuning in the one-shot setting.\n\nRegarding your question on how to choose the hyperparameters (especially $\\tau$) under other datasets/architectures: first, we want to argue that hyperparameter tuning is a common problem for all current backdoor removal algorithms, not just ours. ANP also needs to set the $\\epsilon$ values just like $\\tau$ in our case. Second, we believe that choosing $\\tau$ in our case is actually easier than setting $\\epsilon$ in ANP. It is hard to reason how $\\epsilon$ (neuron sensitivity) would change when the training data dimension goes from 32x32x3 to 224x224x3. Yet for $\\tau$, we can actually decide its value based on the actual image dimension with a fixed ratio. For example, $\\tau=1000$ in the 32x32x3 images corresponds to $\\tau=49000$ in the 224x224x3 image, which corresponds to fully changing the values of a 128x128 region. This actually makes our hyperparameters more interpretable compared with the existing works. Also, please note that in our ablation experiments, we actually have a very wide range of $\\tau$ that satisfies our goal of successful backdoor removal, which makes the hyperparameter tuning a less annoying problem. \n\nIn summary, we believe that our work has significantly improved upon previous backdoor removal works, especially in the one-shot setting, and can also inspire many related works on performing fine-tuning in the one-shot setting. We also believe that the hyperparameter tuning (especially $\\tau$) is not really a problem and actually our hyperparameters are more interpretable than those of the existing works. We hope this further addresses your remaining concerns, and we would really appreciate it if you could further consider our response here in the discussion phase.\n",
 " I would like to thank the authors for the detailed experiments and the thoughtful comments, which have made the method clearer to me. \n\nBased on the experimental results for now, I do not find an obvious fault in this method. Hence, I raise my score to borderline accept. \n\nHowever, I personally think this method is only a heuristic variant of ANP, since the proposed tricks (soft mask, sparse regularization, etc.) do not bring significant insights to the backdoor community. Moreover, although the experimental results look fine on CIFAR10 with ResNet, I still have a little concern about the difficulty of choosing the hyperparameters in other cases (datasets, architectures, attacks). For example, when the number of dimensions of the input images is large, e.g., $3\\times224\\times224$, we will have $\\tau$ ranging from 0 to 150528. I think it would be hard to determine an appropriate $\\tau$, not to mention that one has to simultaneously choose $\\gamma$ and $\\alpha$. \n\nIn summary, judging from the current results, the performance of AWM is acceptable, but the contribution is considered limited. Moreover, it would be much better if the authors could provide a richer study of the hyperparameter search space.",
" We would like to thank you for your comments. We have responded to your question and hope it could help address your concern. In addition, we are more than happy to discuss and address any further questions before the conclusion of the rebuttal period.\n",
" Thank you for your reply and questions. Yes, they are conducted on CIFAR10 with ResNet.\n\nFirst, we want to emphasize that in general, the ACC will decrease with the increase of $\\tau$ while ASR will first decrease fastly and then slightly increase again. Of course, we will also observe some oscillations, but that’s mainly due to the randomness in optimization. To confirm these trends, we additionally provide the full sensitivity study of $\\tau$ on Trojan-WM, Badnets, and $L_0$-inv attacks with $\\tau$ ranging from 10 to 3000 (considering the data dimension for CIFAR10, further increasing $\\tau$ is the same as no constraint and thus meaningless). We can observe that the results fit our expectations.\n\n**Trojan-WM**\n\n| $\\tau=$ | **No Defense** | **10** | **50** | **100** | **500** | **1000** | **1500** | **2000** | **2500** | **3000** |\n| --------- | -------- | ------ | ------ | ------- | ------- | -------- | -------- | -------- | -------- | -------- |\n| ACC | 88.51 | 86.38 | 85.93 | 85.17 | 85.56 | 84.82 | 84.27 | 84.59 | 84.14 | 83.56 |\n| ASR | 99.86 | 51.63 | 9.01 | 8.36 | 10.27 | 13.23 | 16.25 | 24.38 | 22.97 | 25.85 |\n\n\n**Badnets**\n\n| $\\tau=$ | **No Defense** | **10** | **50** | **100** | **500** | **1000** | **1500** | **2000** | **2500** | **3000** |\n| --------- | -------- | ------ | ------ | ------- | ------- | -------- | -------- | -------- | -------- | -------- |\n| ACC | 87.83 | 86.38 | 85.15 | 84.48 | 84.54 | 84.93 | 83.60 | 83.56 | 82.67 | 82.17 |\n| ASR | 97.90 | 24.52 | 11.07 | 12.31 | 13.88 | 14.17 | 13.24 | 12.11 | 13.45 | 13.39 |\n\n\n**L0-inv**\n\n| $\\tau=$ | **No Defense** | **10** | **50** | **100** | **500** | **1000** | **1500** | **2000** | **2500** | **3000** |\n| --------- | -------- | ------ | ------ | ------- | ------- | -------- | -------- | -------- | -------- | -------- |\n| ACC | 88.23 | 85.81 | 85.71 | 85.60 | 85.39 | 85.76 | 85.18 | 85.54 | 85.47 | 84.34 |\n| ASR | 100.0 | 25.97 | 10.13 | 11.06 | 11.52 | 10.26 | 13.69 | 12.77 | 13.85 | 16.29 |\n\n\n\nThe reasons behind this are also easy to understand. In terms of ACC, since we are using a larger $\\tau$, it means that the model is modified (through the masks) to adapt to triggered data points that are further away from the original data distribution. Thus in general, it will hurt the ACC on the original data distribution. In terms of ASR, in an ideal condition, it should be decreasing since the solution space of the optimization problem with larger $\\tau$ actually contains the solution of the same problem with a smaller $\\tau$. Thus in terms of backdoor removal, it should be at least as good as that of a smaller $\\tau$. However, in practice, since we use PGD for solving the inner maximization problem, setting a large $\\tau$ will simply make the $L_1$ norm of the recovered trigger very large (similar to adversarial attacks, PGD will make the solution close to the norm ball boundary). Thus the recovered trigger will be further away from the actual trigger and affects the backdoor removal performances. \n\nIn summary, the ACC will gradually decrease while the ASR will decrease fast at first and then slowly increase. Fortunately, such trends of ACC and ASR actually do not make the selection of $\\tau$ too hard. On one hand, as you mentioned, if we pick the lowest acceptable ACC, even in the worst case (with $\\tau=3000$) the ASR performance is still acceptable. 
On the other hand, we argue that selecting a moderate $\\tau$ (such as 1000) is a reasonable choice, as it can improve upon the worst-case ASR and ACC with little additional risk. Although we do not have prior knowledge of the possible trigger pattern, it is still reasonable to assume that the adversary will tend to make the trigger invisible and small in norm; otherwise, it would be easy for human eyes to detect and hard for the attack to maintain the clean test accuracy. For example, the $L_1$ norm of Trojan-WM’s actual trigger is around 115 (and the triggers from other attacks have similar norms), which is far smaller than our choice of $\\tau=1000$. In fact, $\\tau=1000$ roughly means that we allow the trigger to significantly change over 16x16 pixels in a 32x32 CIFAR image, which is already fairly large. \n\nWe hope the additional results and explanations address your concern.\n",
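The behaviour discussed above—PGD pushing the recovered trigger toward the boundary of the L1 norm ball of radius tau—depends on how that constraint is enforced. One standard choice is the Euclidean projection onto the L1 ball (Duchi et al., 2008); the sketch below is a generic PyTorch version of that projection, offered for illustration rather than taken from the paper.

```python
import torch

def project_l1_ball(x: torch.Tensor, tau: float) -> torch.Tensor:
    """Euclidean projection of x onto {v : ||v||_1 <= tau} (Duchi et al., 2008)."""
    flat = x.abs().flatten()
    if flat.sum() <= tau:
        return x                                   # already feasible
    u, _ = torch.sort(flat, descending=True)
    cssv = torch.cumsum(u, dim=0) - tau
    k = torch.arange(1, u.numel() + 1, device=x.device, dtype=x.dtype)
    rho = int((u * k > cssv).nonzero().max())      # largest index with u_k > (cumsum_k - tau)/k
    theta = cssv[rho] / (rho + 1)
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)

# e.g. after each PGD ascent step on the trigger: delta = project_l1_ball(delta, tau=1000.0)
```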
" I would like to thank the authors for the detailed response and experiments. I guess the additional experiments are conducted on ResNet & CIFAR10. Now I have a question:\n\nIn backdoor defense, we usually want to choose a good hyperparameter based on a budget on the drop of ACC. This works fine when the selection of hyperparameters monotonically reduces ACC and ASR at the same time. For example, $\\alpha$ in the above experiments. Decreasing $\\alpha$ can reduce both ACC and ASR almost monotonically, then we can choose the smallest $\\alpha$ that reach the lowest acceptable ACC. Under this choice, ASR is almost the lowest we can reach within the given budget. There are other typical examples like the learning rate in fine-tuning defense, threshold hyperparameters in ANP etc. Hence, the results on $\\alpha$ and $\\gamma$ is fine to me.\n\nHowever, I notice that the increase of $\\tau$ will reduce the ACC and ASR first, but after some points, the ACC and ASR will increase again ($\\tau$=100 for ASR and $\\tau$=1500 for ACC). That means it may be hard choose the best $\\tau$ within the given budget (the _best_ here means the lowest ASR). For instance, choosing $\\tau=1500$ leads to the lowest ACC, but the ASR is not the lowest. I think this property may bring difficulty in tuning the hyperparameters. Although the highest ASR is still acceptable in the given results, it doesn't ensure we can be such lucky under some other cases (other datasets, attacks, architectures etc.). Because we may choose a hyperparameter that have the lowest ACC but very high ASR. Hence, I'm interested in whether the ASR will keep increasing after 2000. This could be very important.\n\n",
" Thanks for your reply and suggestions on the two types of evaluations.\n\nFirst, we want to assure you that we used the same hyper-parameters for most of the attacks and our hyperparameters have a wide adaptability to different types of attacks. We totally agree with you that as defenders, we usually do not have much prior knowledge about the attack. \n\nLet us revisit the three hyperparameters in our method: $\\alpha$ controls the trade-off between the clean accuracy and backdoor removal effect. $\\gamma$ controls the sparsity of weight masks. $\\tau$ controls the maximum $L_1$ norm of the recovered trigger pattern. In our experiments, we keep $\\alpha$ and $\\gamma$ to be 0.9 and $10^{-7}$ across all the attacks in our experiments. We slightly vary the choice of $\\tau$. The best value of $\\tau$ is indeed related to the attack method but only to a limited degree. A default value of $\\tau=1000$ can also provide similar results. We previously set $\\tau=2000$ for Badnets, $\\tau=100$ for Trojan-WM, and $\\tau=1000$ for other attacks (including a2a, blend, clb, and wanet). To showcase that this choice does not have a huge effect, we also re-do the Badnets and Trojan-WM experiments with the default value of $\\tau=1000$. The following table shows that the performance gap is actually quite small\n\n| | Badnets | Trojan |\n| -------- | ----- | ----- | \n| Fine-tuned | 83.56(12.11) | 85.17(8.36) | \n| Default | 84.93(14.17) | 84.82(13.23) |\n\n\nRegarding your question on the sensitivity of hyperparameters for other attacks, we additionally give the result of the sensitivity study under the Trojan-WM attack (due to time constraints, we are unable to finish the study for all attacks). As can be seen from the following tables, the performance variation trend with regard to the three hyper-parameters is similar and robust across attacks. \n\n\n\n| $\\alpha=$ | 0.50 | 0.55 | 0.60 | 0.65 | 0.70 | 0.75 | 0.80 | 0.85 | 0.90 | 0.95 |\n| --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\n| ACC | 69.89 | 72.36 | 77.82 | 81.43 | 80.72 | 82.48 | 83.43 | 83.37 | 84.82 | 85.19 |\n| ASR | 6.18 | 7.80 | 10.28 | 11.85 | 10.27 | 9.78 | 10.91 | 13.27 | 13.23 | 18.62 |\n\n\n| $\\gamma=$ | $10^{-8}$ | $10^{-7}$ | $10^{-6}$ | $10^{-5}$ |\n| --------- | --------- | --------- | --------- | --------- |\n| ACC | 85.39 | 84.82 | 84.36 | 74.34 |\n| ASR | 9.39 | 13.23 | 10.77 | 10.13 |\n\n| $\\tau=$ | No Defense | 100 | 500 | 1000 | 1500 | 2000 |\n| ------- | ----- | ----- | ----- | ----- | ----- | ----- |\n| ACC | 88.51 | 85.17 | 85.56 | 84.82 | 84.27 | 84.59 |\n| ASR | 99.86 | 8.36 | 10.27 | 13.23 | 16.25 | 24.38 |\n\nIn conclusion, our method is able to deal with different attacks with a default set of hyperparameters and those hyperparameters are indeed robust under various attacks. We hope this address your concerns!\n\n\n\n",
" Thanks for the authors' effort on the experiments. The results on small networks are acceptable to me. The performance of ANP and AWM on WaNet (with dynamic and global trigger) are much closer, which is as expected.\n\nHere I have another question. Since there are three hyperparameter that need to be tuned, I wonder if the authors use the same setting of hyperparameters against all attacks? Because in practice, we do not know which attacks will be used. It is important that we can use the same setting of hyperparameters that can defense most of the attacks. The results will be of little significance if one should carefully adjust the hyperparameters for each attack. As far as I know, ANP can achieve better results if their hyperparameters are very carefully tuned. \n\nI'm glad to see that the authors study the sensitivity of the hyperparameter in section 5.3, but the results are shown only on one type of attack. It is unclear whether the hyperparameters still robust under other attacks. According to my experience, the sensitive of hyperparameters can be very different under different attacks.\n\n In summary, there are two facts about the hyperparameters that should be considered, one is the robustness of the choice under each attack, which have been studied by the authors under only _one_ type of attack. Another one is the robustness of one setting against all the attacks. The latter one is more important to me.\n\nI will raise my score if my concern is addressed.",
" We are glad that our responses have addressed your concerns. Thank you for raising the score.",
" My concerns are solved. I raised my score to weak accept. Thank you!",
" Thanks for your suggestions on additional experiments.\n\nQ1: Evaluate ANP with more small networks to valid its usability.\n\nA1: We additionally test and compare AWM with ANP on ShuffleNet[A] and MobileNet[C]. We summarize the ACC(ASR) on CIFAR10 in the following tables. The results show that our method still outperforms significantly better with limited resources. The VGG we adopted to illustrate \"a small network\" has fewer layers than ShuffleNet and MobileNet, while ShuffleNet and MobileNet have fewer parameters. To sum up, AWM works well on various structures of small neural networks. Note that we only report results on the one-shot unlearning case for illustration. With more training resources, all methods will perform better as shown in Table 1 in the paper and Table 3 in the appendix.\n\n| **ShuffleNetV2** | **Blend** | **CLB** | **WaNet** | **WaNet(a2a)** |\n| ----------------- | ------------ | ------------ | ------------ | -------------- |\n| Original ACC(ASR) | 84.37(99.93) | 83.41(99.78) | 89.85(99.22) | 89.57(84.25) |\n| ANP | 44.15(32.51) | 62.47(6.53) | 64.39(4.77) | 72.58(10.03) |\n| AWM | 69.75(2.69) | 70.33(2.19) | 76.35(7.49) | 75.81(9.73) |\n\n\n\n| **MobileNetV2** | **Blend** | **CLB** | **WaNet** | **WaNet(a2a)** |\n| ----------------- | ------------ | ------------ | ------------ | -------------- |\n| Original ACC(ASR) | 88.83(99.78) | 87.70(100.0) | 93.78(91.01) | 94.08(92.72) |\n| ANP | 51.95(85.33) | 57.65(22.09) | 75.31(18.97) | 79.28(10.33) |\n| AWM | 67.87(2.10) | 66.68(9.70) | 74.29(8.92) | 80.97(13.38) |\n\nQ2: ANP’s reliability on universal and sparse triggers.\n\nA2: We use the ATR and SR as components in our objective function, but we do not limit the triggers as universal and sparse. Our method can also work under other kinds of attacks. The additional results on Blended attack [E] (Blend), Clean-label attack [D] (CLB), and WaNet [36] also support the reasoning. Blend uses gaussian noise (poison rate = 0.01) that covers the whole image. We used CLB with adversarial perturbations and triggers on four corners. WaNet warps the image with a determined warping field as poisoned data, so there is not a fixed trigger pattern. Since it uses a noisy warping field to generate noisy data (with true labels), it is difficult to train a backdoored model with a poison rate of 0.01. We use poison rate = 0.10 and noise rate = 0.20. These three attacks cover scenarios that triggers are dynamic and natural. Our performance verifies that AWM does not rely on universal and sparse triggers.\n\nQ3: The availability of ANP under all-to-all attacks.\n\nA3: The above results include additional all-to-all(a2a) attacks with WaNet. We have presented results on the all-to-all attack with the gtsrb dataset in the supplementary materials in the original submission (see Appendix Section B Table 3). In conclusion, AWM performs well on both all-to-one and all-to-all attacks.\n",
" Thanks for your question!\n\nQ1: Possible design of adaptive attack methods if the adversary knows the optimization procedure.\n\nA1: This is an interesting question. To the best of our knowledge, we have not seen adaptive attacks studying the case with prior knowledge of the backdoor removal procedure. Some principles in developing backdoor attacks, such as making triggers invisible or natural, are data-driven and do not direct a specific backdoor removal optimization procedure. Potentially, the corresponding adaptive attack can be hard on the device since the current backdoor removal techniques including ours, have already involved complicated bi-level optimization. \n\nInstead, we design a simple method targeting our backdoor trigger reconstruction mechanism: we only estimate one universal trigger $\\Delta$ in every iteration. Now suppose the attacker knows our design and decides to inject multiple backdoor patterns (e.g., 3 trigger patterns) into the model. They would hope our design could only remove one of them and thus fail on the other triggers. \n\nThe following table summarizes the result of defending against multiple backdoor triggers on CIFAR10. We first train a multiple backdoored model using three different types of triggers: Badnets trigger, Trojan-WM trigger, and $L_2$-inv trigger. The overall poison rate is set as $5\\\\%$. Then we apply our AWM to remove the backdoors. ASR (all) represents the attack success rate as long as any one of three triggers fooled the model. ASR1, ASR2, and ASR3 represent the attack success rate of each of the three triggers. ACC is the test accuracy on the clean test set. The results show that adding multiple triggers still cannot penetrate our AWM defense even with limited resources.\n\n| ShuffleNet | ASR (all) | ASR1 | ASR2 | ASR3 | ACC |\n| -------- | ----- | ----- | ----- | ----- | ----- |\n| Original | 99.83 | 95.80 | 99.52 | 99.15 | 84.86 |\n| AWM(500 images) | 10.38 | 9.54 | 8.19 | 13.16 | 77.02 |\n| AWM(one-shot) | 8.47 | 13.54 | 10.34 | 16.21 | 71.83 |\n\nWe conjecture that the reason for this result is that although multiple triggers are involved, our algorithm will still try to identify the most likely triggers in each iteration. Thus when the first (the most prominent) ‘trigger’ is removed, the algorithm will automatically target the next likely ‘trigger’.\n\nWe believe that such backdoor removal strategies can be hard to penetrate. Successful attacks may need to leverage tri-level optimization problems, which are notoriously hard to solve, or aim to make the removal strategy impractical by lowering its natural accuracy, which currently has no concrete solutions. We will leave this problem as one of our future work directions.\n\n\n\n\n",
" Thank you for your comments!\n\nQ1: The difference between ANP and AWM. The novelty of the algorithm.\n\nA1: We respectively disagree with your opinion on the novelty of our work. Please note that the mentioned binary/soft masking, as well as the sparsify on $m$, are just minor tricks used in our method, not our major contributions. The formulations of ANP and AWM have two other major differences: (1) we put masks on the model weights while ANP focuses on masking the neurons (as shown in Section 3, this gives us advantages in feasibly saving benign information especially when the network is small); (2) we estimate the worst-case perturbation $\\Delta$ to the input data for backdoor removal while ANP perturbs the neurons for finding the sensitive ones (which represents two totally different strategies for backdoor removal and explains why they adopt binary mask while we adopt soft mask). The above two steps, noted as SWM and ATR, both contribute to the performance improvements as can be seen in Table 3. Although the sparsity constraints on the weight masks do not bring significant improvements, we additionally show and discuss the impacts of different forms of sparsity constraints in Table 4. Nevertheless, we will cite and discuss the two sparsity regularization strategies (update and cite in the paper revision)\n\nQ2: Explain the performance gap with the original IBAU paper.\n\nA2: We are sorry for the confusion here. Please note that in our paper, we consider a different poison rate from the original IBAU paper, which leads to the performance difference. In Table 5 of IBAU, the poison rate is set as 0.20, while we follow ANP and adopt a poison rate of 0.05 in our paper. We believe that the poison rate of 0.20 is too large and impractical in real-world scenarios and a lower poison rate will further demonstrate the power of the proposed backdoor removal method.\n\nQ3: The reason why we need the one-shot unlearning setting.\n\nA3: We do agree that the one-shot case is an extreme setting. In practice, we may have a small amount of training data available. Yet we want to argue that our method not only works much better under the one-shot case but also works better or at least comparatively well in other cases. As can be seen from Table 1 in the paper and Table 3 in the appendix, in scenarios with a few hundred clean data, AWM still largely outperforms other backdoor removal baselines. Moreover, in some cases, the defender may have no clue what is the training data at all. Under such settings, it would be wonderful if the model owner could also remove the backdoors in a data-free way, which is our future aim. Studying the backdoor defense methods under the one-shot case brings us closer to that goal.\n",
" This paper improves upon the previous backdoor defense method ANP, by replacing the binary neuron pruning in ANP with a more flexible soft weight masking method. Experimental results show performance improvements over ANP. Strengths:\n1. All three technical improvements made over ANP are intuitively correct. And the results validate their effectiveness. \n2. The paper is well-written and easy to follow. \n\nWeakness:\n1. The technique novelty is limited. \nThe proposed method is a minor modification over ANP. The main difference is that it replaces hard-threshold binary masking in ANP with a more flexible soft weight masking/reweighting. Comparing Eq 3.2 (ie ANP) with Eq 4.2 (ie the preliminary proposed method), we can see the difference is only in the range of the mask: ANP require the mask to be binary while AWM (the proposed method) allows it to be continuous values in range [0,1]. Although AWM further used two more modification (sparsity on recovered backdoor pattern \\delta and sparsity on the mask m), ablation study results in Table 3 show they don't bring considerable improvements: the gain is mainly from the use of softmasks. I would say the proposed method works as people would expect, but only as a minor modification over ANP. Also, the sparsity regularization on recovered backdoor pattern (Eq 4.2) is also hardly a novel contribution. It has been used in Neural Cleanse [1] and many follow-up works such as [2]. \n\n[1] Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. S&P, 2019. \n[2] Better Trigger Inversion Optimization in Backdoor Scanning. CVPR, 2022.\n\n2. The reported results on the baseline method IBAU is too low.\nIn table 1, the authors reported IBAU has over 30% attack success rate (ASR) against Trojan-WM when 100 clean samples are available. However, in the original IBAU paper, in their Table 5, under the same setting, the ASR is as low as 4%. Can you please explain why there is such large gap, or point out what difference in the setting I have overlooked? Thanks!\n\n3. This paper highlights its performance under one-shot setting. It is indeed advantage if the backdoor defense method requires less clean samples to fix the poisoned model. However, why do we need to peruse the extreme case of one-shot (ie only one clean sample is available for defense)? I think it practical for the defender to have a small amount (eg some hundreds) of clean data. Please see above. Please see above",
" This paper tries to improve the recent proposal of ANP in the small training data range, by using an \"optimization\" based procedure to find a soft-mask, in order to remove the backdoor trigger pattern.\n\nThe approach is well motivated, and well formulated (4.1--4.4)\n\nExperiments show the effectiveness of the method, especially in the one-shot case Strengths: Well motivated approach. The presentation is very clear and strong experiment results are shown\n\nWeakness: I'm not sure whether this has been studied, but if the adversary is aware of the optimization procedure for removing the backdoor pattern, would he be able to leverage that to counter the defense? Are there possibilities for adaptive attacks if the adversary knows the optimization procedure for removing the backdoor pattern? Is this a sensible question to ask? No limitation identified",
" This paper try to address two main drawbacks of adversarial neuron pruning (ANP), that is, ANP 1) largely depends on the size of the training set and 2) performs badly when it fails to identify sensitive neurons. On this basis, the authors propose three strategies, namely soft weight masking (SWM), adversarial trigger recovery (ATR) and sparsity regularization (SR). The proposed methods outperform ANP especially when the available dataset is small and the network size is small. Strengths\n\na. The paper is clearly written and easy to follow, and the motivation is reasonable.\n\nb. The proposed methods outperform ANP in limited resources cases, especially in one-shot setting.\n\nWeaknesses:\n\na. The evaluation is insufficient. \n\nb. Although AWM seems better than ANP, the generalization to different attacks is considered not as good as ANP. This is because the authors assume the trigger is universal (ATR) and sparse (SR), which do not cover dynamic triggers [36, 37] and triggers that are spread over the whole image [33].\n a. As the authors claim that the proposed methods help when the network size is small, they should test their methods on more typical lightweight architectures such as ShuffleNet [A], SqueezeNet [B] and MobileNet [C]. The results only on VGG is unconvincing.\n\nb. The defense effect against stronger attacks (e.g., clean label attack [D], blended attack [E]) and under various poisoning rate (especially 1%) should be tested. \n\nc. I wonder if the proposed defense work against all-to-all attacks in one-shot setting?\n\n[A] Zhang, X., Zhou, X., Lin, M. and Sun, J., 2018. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6848-6856).\n\n[B] Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J. and Keutzer, K., 2016. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and< 0.5 MB model size. arXiv preprint arXiv:1602.07360.\n\n[C] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M. and Adam, H., MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.\n\n[D] Turner, A., Tsipras, D. and Madry, A., 2019. Label-Consistent Backdoor Attacks.\n\n[E] Chen, X., Liu, C., Li, B., Lu, K. and Song, D., Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. No. One of the limitations of the proposed methods is that it assumes the trigger is universal and sparse, which do not cover dynamic triggers [36, 37] and triggers that are spread over the whole image [33]."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"IWtMfuQD1H",
"1YTTLZLjr8",
"oSRjdWiZdcp",
"cz9mF2Kk2nB",
"Yx-2LrJE01",
"HnHkHlzSSpk",
"RUGmjYqQcLA",
"9eYgULnge3A",
"XxYEKfzDGim",
"akJw83dFs2h",
"oSRjdWiZdcp",
"HKDd_k3hzl",
"nips_2022_Yb3dRKY170h",
"nips_2022_Yb3dRKY170h",
"nips_2022_Yb3dRKY170h"
] |
nips_2022_sr0289wAUa | ASPiRe: Adaptive Skill Priors for Reinforcement Learning | We introduce ASPiRe (Adaptive Skill Prior for RL), a new approach that leverages prior experience to accelerate reinforcement learning. Unlike existing methods that learn a single skill prior from a large and diverse dataset, our framework learns a library of different distinction skill priors (i.e., behavior priors) from a collection of specialized datasets, and learns how to combine them to solve a new task. This formulation allows the algorithm to acquire a set of specialized skill priors that are more reusable for downstream tasks; however, it also brings up additional challenges of how to effectively combine these unstructured sets of skill priors to form a new prior for new tasks. Specifically, it requires the agent not only to identify which skill prior(s) to use but also how to combine them (either sequentially or concurrently) to form a new prior. To achieve this goal, ASPiRe includes Adaptive Weight Module (AWM) that learns to infer an adaptive weight assignment between different skill priors and uses them to guide policy learning for downstream tasks via weighted Kullback-Leibler divergences. Our experiments demonstrate that ASPiRe can significantly accelerate the learning of new downstream tasks in the presence of multiple priors and show improvement on competitive baselines. | Accept | This paper proposes a method to adaptively combine and use skill priors for reinforcement learning. The setting is very practical and the method proposed is novel and effective empirically. The main concerns from the reviewers were around (1) the experimental setup being too narrow and simple, and (2) the clarity of the paper. The authors added in an additional robotics experiment which partly addressed (1) and provided clarifications in their response to address (2). Overall, this seems like a solid contribution idea-wise that will inspire more work in this direction. I highly encourage the authors to revise the paper in terms of clarity (based on their rebuttal responses) as well as to consider adding a comparison to PARROT (as suggested by reviewer iPXJ). | train | [
"esV2X2yd-e",
"86eJhUNUK7S",
"2hTUm6goM6x",
"mUW_V_4cW28",
"R_73ECsZbqq",
"MOm8x22BtbC",
"Y65-82lR-BZ",
"YXfzg__qMT",
"1wqqX33Cgxi",
"qQz12S29VB",
"2hnY116bYeo",
"N1VDdVwgmbv",
"bdp2GEZgkXZ",
"idQrqcfDRSI"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. Result from robotic manipulation task: Thanks for adding this experiment! It definitely adds to the overall soundness of the paper. I strongly encourage the authors to finish some baseline evaluations and add its corresponding plot to Figure 2. \n\n2. Thanks for the clarification! This makes sense.\n\n3. I appreciate the authors' discussion on hierarchical RL and the coverage of skill primitives! Please rest assure that this limitation is not where I base my evaluation because these are open challenges for the entire community.",
" We’d like to reach out again to check if there were any additional questions or concerns about our rebuttal that we can address before the reviewer-author discussion period ends on August 9. Thanks again for taking the time to read our work and provide helpful feedback!",
" We’d like to reach out again to check if there were any additional questions or concerns about our rebuttal that we can address before the reviewer-author discussion period ends on August 9. Thanks again for taking the time to read our work and provide helpful feedback!",
" We are glad that our response addressed most of your concerns. \n\n> 1) small-scale (simple) experiments compared to MCP and SPiRL \n\nWe agree. We construct a new robotics manipulation task in robosuite environment. The environment details, data collection and experiment results are in additional experiment section A.1,\n\n\n> 2) too many moving parts\n\nWe believe that the number of parts is not a metric to measure whether a model is good/bad or whether it is stable or not. However, to address the reviewer’s concern, we can compare the components between ASPiRe and SPiRL and analyze where the instability might come from, and conduct sensitivity analysis in our additional experiment section A.2. We also want to highlight that we run 8 seeds for all experiments in the main paper and the performance of our model is stable across seeds ( Fig. 2). \n\nThe extra components ASPiRe compared to SPiRL are a) multiple skill priors instead of one, b) skill prior generator $G(z|s,\\omega)$, c) weighting function $\\omega$ and an associated optimization procedure. AWM consists of b) and c). Our current paper writing might cause confusion that AWM is another component. We will update our final paper to avoid this confusion. \n\nFor a), to learn each primitive, we use the same procedure that SPiRL trains the single skill prior. Given sufficient training data, primitives should be as stable as the single primitive in SPiRL. \n\nTo validate component b), we present the qualitative results in different domains (Fig 3,4,7) to demonstrate that our learned weight assignment adapts to the tasks on hand. The weight assignment is highly correlated to the skill prior generator $G$, i.e., it evaluates the expected critic value of skills under the distribution generated by $G$. This suggests that the skill prior generator is robust to the different domains. \n\n\n\nTo validate component c), the optimization procedure for the weighting function depends on the critics. The critic is the common component between ASPiRe and SPiRL, and its learning is independent of the extra components in ASPiRe. Therefore, the instability might only come from estimating the expected critics value of the composite skill distributions. In this process, the sample size M might cause unstable training. Therefore, we have run the hyperparameters sensitivity analysis and found our method can work well even with M=1. The results are in additional experiment section A.2. \n",
" Thank the authors for your response. The rebuttal addresses most of my concerns. \n\nI read other reviewers' comments and I do agree with the critics on (1) small-scale (simple) experiments compared to MCP and SPiRL and (2) too many moving parts, and I do not think the authors' responses are sufficient to convince the reviewers. I recommend the authors address these comments better than the current responses.",
" We thank the reviewer for all insightful comments. We are encouraged the reviewer can recognize that our work solves an important research problem and find our work has strong performance compared to competitive baselines. \n\n\n> Did you try running the method on domains other than navigation? What did you find? Even negative results can be useful to look at here.\n\nWe construct a new robotics manipulation task in robosuite environment. The task is to control a Panda robot arm to grasp the box without colliding with the barrier in the middle of the desk. Our experiments suggest that our approach is able to learn efficiently and converge to high success rates in the robotic manipulation environment. Due to time constraints, we are unable to run all baselines. However, we present the qualitative results (See Fig. 7) to demonstrate ASPiRe’s ability to compose skills. We include the environment details, data collection and experiment results in Additional experiment section A.1. For the reviewer’s convenience, we place the Additional experiment section as the additional appendix in the main paper pdf (after the checklist). \n\n\n\n\n\n> One primary concern was training stability and sensitivity of hyperparameters.\n\nSkill encoder/decoder and all skill priors are trained offline. Those components are freezed during the online training. Given that the training dataset is relatively large, we believe they should not contribute to the instability. Therefore, we focus on the hyperparameters in the online phase. \n\nWe have run additional experiments to conduct a hyperparameters sensitivity analysis. We include the experiment details and our analysis in Additional experiment section A.2. For the reviewer’s convenience, we place the Additional experiment section as the additional appendix in the main paper pdf (after the checklist). We summarize the main finding here:\n\n\nThere are two hyperparameters that can potentially impact the online learning for the downstream task: a) The target KL divergence $\\delta$, which is used to automatically tune the temperature parameters in Eq. 4 and b) The sample size $M$ which is used to estimate the critic value in Eq. 7. We investigate the impact of the sample size $M$ on both Point Maze and Ant Maze environment and the choice of the target KL divergence $\\delta$. \n\nWe first investigate the impact of sample size M by setting $M=1,10,20$. We find that the sample size has almost no impact on the learning. We hypothesize two reasons: a) The large train batch size in our experiment helps to estimate the critic value $Q(s,\\omega)$ more stable and b) Given the input size of weight vector $\\omega$ is small, it might be easy for the neural network to approximate its value via small sample size.\n\nWe further investigate the impact of the target KL divergence $\\delta$. We find that target KL divergence significantly impacts the learning. Based on our experiments, we conclude that a) More complex tasks require higher target KL divergence. The optimal policy to solve complex tasks might be significantly different from the composite skill prior. Therefore, the policy needs more “space” to explore around the composite skill prior. b) Imposing too small target KL divergence can lead to downgraded performance. This is because the policy will be forced to stay close to the composite skill prior and itself might not be able to solve such complex tasks. c) Imposing too big target KL divergence can lead to downgraded performance. 
As target KL divergence increases, the learned policy will receive less guidance from the prior. Though the learning is sensitive to the choice of target KL divergence, we find that there might still be a range of KL divergences values leading to the same optimal performance. \n\n\n \n\n> Comparison to PARROT\n\nThe code for PARROT is not released. Given the time constraints, we are unable to implement PARROT and compare it with our method. The closest baseline compared with PARROT in our evaluation might be BC+Fine tune. In BC+Fine tune framework, we initialize the RL policy with the BC policy learned from offline and use it to explore the environment more efficiently. This is similar to PARROT, which initializes the RL policy with the base distribution used for training the prior. Notice, BC+Fine tune in our evaluation executes actions in the skill space, which is also similar to PARROT which executes actions on z. \n",
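The role of the target KL divergence described above can be illustrated with a SAC-style dual update that adapts the KL temperature so the policy's divergence from the composite prior tracks the target. The sketch below is a generic illustration of this mechanism, not the authors' implementation, and the learning rate is a placeholder.

```python
import torch

log_alpha = torch.zeros(1, requires_grad=True)      # KL temperature, parameterised in log space
alpha_opt = torch.optim.Adam([log_alpha], lr=3e-4)

def update_temperature(kl_to_prior: torch.Tensor, target_kl: float) -> float:
    """Dual update keeping E[KL(policy || composite prior)] close to the target."""
    alpha = log_alpha.exp()
    # If the measured KL exceeds the target, this loss increases alpha, pulling the policy
    # back toward the prior; if the KL is below the target, alpha shrinks and the policy
    # is free to deviate further (the trade-off discussed above).
    alpha_loss = (alpha * (target_kl - kl_to_prior.detach())).mean()
    alpha_opt.zero_grad()
    alpha_loss.backward()
    alpha_opt.step()
    return float(log_alpha.exp())
```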
" We thank the reviewer for insightful and positive feedbacks. We are encouraged the reviewer found that our work is novel in the field of policy regularization literature. We are glad the reviewer recognizes our approach can even work with the presence of distractor skill priors. \n\n> The baselines and prior work present results on more challenging control tasks. \n\nWe construct a new robotics manipulation task in robosuite environment. The task is to control a Panda robot arm to grasp the box without colliding with the barrier in the middle of the desk. Our experiments suggest that our approach is able to learn efficiently and converge to high success rates in the robotic manipulation environment. Due to time constraints, we are unable to run all baselines. However, we present the qualitative results (See Fig. 7) to demonstrate ASPiRe’s ability to compose skills. We include the environment details, data collection and experiment results in Additional experiment section A.1. For reviewer’s convenience, we place the Additional experiment section as the additional appendix in the main paper pdf (after the checklist). \n\n\n\n\n\n> It would be useful to clarify what is learned offline vs online in section 3, especially 3.3.\n\nBoth weighting function $\\omega$ and skill prior generator $G$ is learned online. We have updated our paper to clarify this. \n\n> The motivation behind the skill prior generator is difficult to understand….\nThe adaptive weight module evaluates the expected critic value of skills under the composite skill prior distribution generated by weight $\\omega$. Notice that the composite skill prior distribution is generated by weighted KL divergence. We select the weights which can lead to the highest expected critic value to assign among primitives. We have updated the paper to clarify the confusion. \n\n\n\n> A baseline that composes the learned priors with a mixture of experts model (i.e., G=∑i=1Kpai(z|s)) would clarify the need for the skill prior generator.\n\nMCP is the baseline that composes the learned priors with a mixture of experts model. As we pointed out in the paper (L245), MCP composes the same set of learned priors as ASPiRe and executes the composite distribution as the policy. Instead of additive Gaussian, it uses the multiplicative Gaussian. In the original MCP paper, they show that MCP has better performance than additive Gaussian. Therefore, we believe MCP is a competitive baseline to compare with. \n\nTo further address the reviewer’s concern. We included the additional experiment that uses composite skill prior as a policy in Additional experiment section A.4. For the reviewer’s convenience, we place the Additional experiment section as the additional appendix in the main paper pdf (after the checklist). We observe that using composite skill prior as a policy results in poor performance (similar performance as MCP), which confirms the importance of using composite skill prior as prior instead of policy\n \n \n\n> Section 3.2 could make reference to SVGD [1] and SVPG [2]. SVPG has a similar prior regularization, but for policy learning directly as opposed to skill primitives.\n\nThanks for pointing them out. We have added them into our updated paper. \n\n> Is Gμlearned offline and $ω_\\sigma$ learned online?\n\nBoth skill prior generator $G$ and weighting function $\\omega_\\sigma$ is learned online. We have updated our paper to clarify this. \n\n> What ω is Gμ conditioned on in equation 6? 
Should it be ωi?\n\nThe skill prior generator G in equation 6 is conditional on $\\omega$ as a whole. As G generates a composite skill prior, it needs to know the weight assigned to every skill primitive. \n\n> Are all of the baselines trained with the same number of parameters?\n\nYes. The policy network $\\pi$ and critic networks Q have the same number of parameters across all baselines. The number of parameters and architecture for skill encoder/decoder and skill priors are the same for ASPiRe and SPiRL. We used the same neural network architecture and number of parameters reported in the original SPiRL paper. \n\n> Lines 305-307 state that the push agent fails to complete the setting requiring traversing the maze... Why is the push prior so much stronger than navigation in figure 6?\n\nIn the grey phase of figure 4, it is true that push prior contributes more weight than navigation. This is because the push primitive includes the skills allowing the ant to approach the box (in a straight line). If the weights are more biased towards navigation prior, the ant will receive less guidance on how to approach the box. Intuitively, the push primitive provides the ant the guidance on which part of the maze it should explore. The navigation prior guides the ant how to explore in the maze. As the ant will only receive the reward if the box is pushed to the target, the push primitive might be the most important one among the three given primitives. Figure 6 also confirms this conclusion.\n",
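The weighted-KL composition described in this response (the composite skill prior acting as a regulariser for the policy, rather than being executed directly) can be summarised in a short actor-loss sketch. This is an illustration under assumed interfaces—`policy(state)` and each `priors[i](state)` return a `torch.distributions.Normal` over the latent skill, and `weights` is the simplex vector from the adaptive weight module—not the authors' code.

```python
import torch
from torch.distributions import kl_divergence

def actor_loss(policy, critic, priors, weights, state, alpha):
    """Maximise the critic value while staying close to a weighted combination of skill priors."""
    pi = policy(state)                      # Gaussian over the latent skill z
    z = pi.rsample()                        # reparameterised sample for the critic term
    q_value = critic(state, z)

    weighted_kl = sum(
        w * kl_divergence(pi, prior(state)).sum(-1)   # one KL term per skill prior
        for w, prior in zip(weights, priors)
    )
    return (-q_value + alpha * weighted_kl).mean()
```

The weight vector itself would then be trained separately, e.g. to maximise the expected critic value of skills drawn from the resulting composite prior, as the response describes.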
" We thank for the reviewer for recognizing our approach is novel and effective. We are encouraged that the reviewer find our system can compose multiple skill priors simultaneously and effectively solve the downstream tasks. \n\nWe have updated all the citations, and updated the paper writing to improve clarity.\n\n> In L28-32, the claim about the single skill prior not being able to handle composition is not very intuitive. Also, Figure 1 does not seem relevant…it should be more explicilty described\n\nThe agent needs to compose navigation skills and avoid skills concurrently when the obstacle is observed (Fig 1) and only activate navigation skills when it is not.\n\nWe have updated our paper to describe our motivation to prefer multiple skill priors to compose skills. We have also updated the figure 1 caption to describe the scenario in figure 1 explicitly is about the simultaneous execution of two skills. \n\n> \"the learned policy\" sounds weird in some contexts, e.g., L43, L46.\n\nWe have updated the paper. Thank you for pointing this out. \n\n\n> It might be good to show the performance of the uniform weights…., this simple baseline may work as well.\n\nWe have tested the uniform weight in the Ant Maze domain. Specifically, we experimented with two different settings: a) all skill priors are relevant to the downstream tasks b) not all skill priors are relevant to the downstream tasks. We have put the additional experiment results in Additional experiment section A.4. For the reviewer’s convenience, we place the Additional experiment section as the additional appendix in the main paper pdf (after the checklist). Here, we briefly summarize the result. \n\nThe reviewer’s hypothesis is correct in setting a) when all primitives are relevant. We observe that uniform weights also deliver good performance. If we compare it with many other random weights assignments, uniform weights achieve a near-optimal assignment. Regardless, our algorithm is still able to achieve the best performance among all different weight assignments, showing that the system has enough flexibility to learn optimal policy. \n\nHowever, in the setting b) not all skill priors are relevant to the downstream tasks. \nOur experiment shows that the adaptive weight assignment significantly outperforms the uniform weight. This suggests that our adaptive weight assignment is robust to the various downstream tasks. \n\n\n\n\n> Is the MCP baseline identical to using the proposed composite skill prior as a policy? Otherwise, it would be important to include this baseline.\n\nThey are very similar but slightly different. Both of them try to sample the action from composite distribution directly, but the difference is how they composite the distribution. One uses multiplicative gaussian (MCP), and the other uses weighted KL (ours).\n\nWe thank for the suggestions and agree it’s an important comparison to include in the paper. We included the additional experiment that uses “proposed composite skill prior as a policy” in Additional experiment section A.4. We observe that using composite skill prior as a policy results in poor performance (similar performance as MCP), which confirms the importance of using composite skill prior as prior instead of policy\n\n> Most papers are cited as arXiv papers, which are conference or journal papers. Please use appropriate citation.\n\nThanks for pointing it out. We have updated the citation with conference/journal format. \n\n> There are many typos (including notations) and grammatic errors. 
The writing generally needs some improvement.\n\nGiven the time constraints and the number of experiments we need to run, we are only able to fix a limited number of typos. We will carefully go through our paper before we submit our final version. \n",
" We first thank the reviewer for the thoughtful and positive feedback. We are encouraged that the reviewer can find the adaptive skill prior to be a contribution to the existing hierarchical Reinforcement Learning literature. \n\n> The experiment environments are from a similar domain… Thus, the conclusion drawn seems a bit weak.\n\nWe construct a new robotics manipulation task in robosuite environment. The task is to control a Panda robot arm to grasp the box without colliding with the barrier in the middle of the desk. Our experiments suggest that our approach is able to learn efficiently and converge to high success rates in the robotic manipulation environment. Due to time constraints, we are unable to run all baselines on this new task. However, we present the qualitative results (Fig. 7) to demonstrate ASPiRe’s ability to compose skills. We include the environment details, data collection and experiment results in Additional experiment section A.1. For the reviewer’s convenience, we place the Additional experiment section as the additional appendix in the main paper pdf (after the checklist). \n\n\n> As talked about in appendix A.2.1 … Would this make the skill latent space z more tailored towards the first task?\n\n\nIt is incorrect. As discussed in the main paper, section 3.1, the skill encoder and decoder are learned from an aggregated dataset by randomly sampling the state-action tuple from each primitive dataset. The sampling process is uniform among each primitive such that the latent space will not tailor toward any primitive. Later, all primitive skill priors will operate on this shared skill encoder and decoder. Therefore the skill latent space won’t be tailored towards any task in particular. We have added this detail to the appendix to reduce the confusion. \n\n\n> A major limitation of hierarchical learning with fixed primitives is that the task could be impossible to complete if the primitives are not comprehensive enough. \n\nWe don’t agree with the assessment. Similar to other prior works (e.g., SPiRL), the policy for the downstream task could benefit from using relevant skill prior (primitives), but not limited to them. For example, if we impose a higher target KL divergence in equation (4). The algorithm will behave like a typical RL algorithm and flexibly learn new skills that are deviating from the skill priors.\n",
" We thank the reviewers for their thoughtful feedback. We are encouraged they found our work to be novel (R1, R2, R3), well explained (R1,R2), and our evaluation intuitive (R1,) and thorough (R3). We are pleased they found our approach archives significant improvement (R2, R3, R4) against appropriate baselines (R3, R4). We are glad R4 recognizes that our work tackles an important problem and R3 recognizes that our work enables datasets to be leveraged for a broader set of target control tasks. \n\nOne primary concern was the lack of evaluation in the more complex domain (R1,R2,R4), e.g., robotic manipulation. We construct a new robotics manipulation task in robosuite environment. The task is to control a Panda robot arm to grasp the box without colliding with the barrier in the middle of the desk. Our experiments suggest that our approach is able to learn efficiently and converge to high success rates in the robotic manipulation environment. We include the environment details, data collection and experiment results in additional experiment section A.1, Figure 7 and 8.\n\nThe other concern was around training stability (R4). We conduct a hyperparameters sensitivity analysis to address the concern. Our experiment shows that our approach is stable with a reasonable choice of parameters. We further conclude the logic to select the parameters and the cause of downgraded performance with an inappropriate choice of parameters. We include the experiment details and our analysis in Additional experiment section A.2 Figure 9 and 10. \n\nR2 and R3 suggest to compare with directly executing composite skill prior as a policy. We conduct and inlcude the comparative experiment in paper additional experiment section. We observe using composite skill prior as a policy resulting poor performance (similar performance as MCP), confirming the importance of using composite skill prior as prior instead of policy. We include the experiment details and our analysis in Additional experiment section A.3 Figure 11.\n\nR3 suggests to show the performance of the uniform weights for multiple skill priors to see the effect of the proposed adaptive weight module. We thank for the suggestions and agree it’s an important ablation study to include in the paper. We conduct the experiment and find that our system is able to achieve the best performance among all different weight assignments when all primitives are relevant. However, in the case of irrelevant primitives presented, our experiment shows that the adaptive weight assignment significantly outperforms the uniform weight. This suggests that our adaptive weight assignment is robust to the various downstream task. We include the experiment details and our analysis in Additional experiment section A.4.\n\nWe also updated our paper to reduce the confusions that the reviewers had, e.g., the training details of skill encoder/decoder (R1,R3), the reasoning for using multiple skill priors (R2) and the motivation behind AWM (R3). We further correct the citation format (R2) and add the missing citation (R3).\n\nWe address reviewer comments below in detail and will incorporate all feedback.\n",
" This paper presents a hierarchical Reinforcement Learning method, where the high-level controller chooses skills that maximize the reward while staying close to a weighted combination of all skill priors. The skills and skill priors are obtained by fitting one skill prior to each labeled dataset using a VAE. A key contribution of this work is the adaptive skill prior, which learns to decide the weighting among skill priors that leads to the highest critic value. The method is evaluated in 3 simulated environments and is shown to be advantageous in terms of sample efficiency and final model performance. Strengths:\n- The components in the proposed approach are mostly well explained, and the whole idea is sound.\n- The Adaptive Weight Module is a novel method to assign optimal weightings among priors to generate skills with high values.\n- The visualization of weights along a rollout trajectory is very intuitive.\n\nWeaknesses:\n- The experiment environments are from a similar domain. In particular, the point mass and ant differ mainly in terms of low-level dynamics, which mainly challenges the skill learning stage (not the main contribution of this work). Thus, the conclusion drawn seems a bit weak. - As talked about in appendix A.2.1, the skill encoder and decoder are essentially learned from a single primitive and expected to generalize to other primitives. Would this make the skill latent space z more tailored towards the first task? A major limitation of hierarchical learning with fixed primitives is that the task could be impossible to complete if the primitives are not comprehensive enough. Also as the authors have discussed in the paper, separate datasets of different primitives are required. It's unclear how this method will perform in harder downstream tasks that require more primitives.",
" This paper extends the prior work on skill prior-regularized RL by learning a skill prior for each skill. When K skill labels are available from the offline data, we can learn K skill polices using VAE as well as K skill priors, which tell an agent whether each skill is plausible to execute given a state. To intermingle K different skill priors into one composite skill prior for downstream RL, a weighting function of K skill priors is learned to maximize the estimated Q-value when following the composite skill prior. One benefit of the composite skill prior is that it enables simultaneous skill composition, e.g. navigating towards a goal while avoiding an obstacle. The experiments demonstrate that the proposed method can effectively combine up to three skill priors, and tackle downstream tasks by not only exectuing one skill at a time but also executing composite skills.\n ### Strengths\n\n- Figure 3 and 4 are very intuitive and well explain that how the proposed adaptive weight module learns to regulate the downstream task policy.\n\n- The proposed method of weighting different skill priors is novel and the empirical results show the effectiveness of using multiple skill priors quantitatively and qualitatively.\n\n### Weaknesses\n\n- In L28-32, the claim about the single skill prior not being able to handle composition is not very intuitive. Also, Figure 1 does not seem relevant to this context. If this composition is about a mixture of two skills (simultaneous execution of two skills), it should be more explicilty described.\n\n- \"the learned policy\" sounds weird in some contexts, e.g., L43, L46.\n\n- The assumption of having task labels for trajectories can limit the applicability of the proposed method. The major benefit of the prior work, SPiRL, was in being able to leverage unstructured data.\n\n- It might be good to show the performace of the uniform weights for multiple skill priors to see the effect of the proposed adaptive weight module. Given that only push skill prior can achieve comparable results with ASPiRe in Figure 6, this simple baseline may work as well.\n\n- Is the MCP baseline identical to using the proposed composite skill prior as a policy? Otherwise, it would be important to include this baseline. - Most papers are cited as arXiv papers, which are conference or journal papers. Please use appropriate citation.\n\n- There are many typos (including notations) and grammatic errors. The writing generally needs some improvement.\n The limitations and potential negative societal impact are well addressed in the paper.",
" The paper tackles the problem of composing skills from multiple offline datasets. Aspire learns multiple priors from different offline datasets and then learns a set of weights over the priors and a composite prior predictor during online training on new tasks that require application of multiple skills. The method outperforms BC, Multiplicative Compositional Policies (MCP), and SPiRL on a set of 3 Maze tasks that require the agent to compose pushing, avoiding, and navigating actions. - Strengths\n - Clarity: The abstract and introduction state the motivation and setup clearly. I found the method section up to 3.3 to be straightforward and concise.\n - Significance: The authors demonstrate that aspire can effectively learn composite policies from offline data of diverse and distinct skills. These results hold even in the presence of distractor skills. Work in this direction enables datasets to be leveraged for a broader set of target control tasks.\n - Quality: The evaluation is thorough: the authors test multiple relevant baselines (MCP, spirl) and show the effectiveness of the method in the presence of irrelevant skills.\n - Originality: To my knowledge, the algorithm and associated losses for a policy regularized by composed primitives is novel.\n- Weaknesses\n - Clarity: The explanation of the approach could be made more straightforward.\n - It would be useful to clarify what is learned offline vs online in section 3, especially 3.3. \n - The motivation behind the skill prior generator is difficult to understand because there is no reference to the skill prior generator in the introduction text of section 3 or in figure 2. Based on the previous sections, I expected the adaptive weight module to be a convex combination of the individually learned priors. \n - Quality:\n - A baseline that composes the learned priors with a mixture of experts model (i.e., $G = \\sum_{i=1}^Kp_a^i(z|s)$) would clarify the need for the skill prior generator.\n - Figure 6 shows that the push prior accounts for much of the success of the model and Figure 4 shows that, even for maze traversal, the push prior is contributes the most weight to the policy. This counterintuitive result is not explained.\n - Significance:\n - The baselines and prior work present results on more challenging control tasks. E.g., MCP studies tasks on biped, humanoid, and t-rex in addition to ant. SPiRL studies FrankaKitchen.\n - Originality: Section 3.2 could make reference to SVGD [1] and SVPG [2]. SVPG has a similar prior regularization, but for policy learning directly as opposed to skill primitives.\n\n[1] https://arxiv.org/abs/1608.04471\n[2] https://arxiv.org/abs/1704.02399 - Is $G_\\mu$ learned offline and $\\omega_\\sigma$ learned online?\n- What $\\omega$ is $G_\\mu$ conditioned on in equation 6? Should it be $\\omega_i$?\n- Are all of the baselines trained with the same number of parameters?\n- Lines 305-307 state that the push agent fails to complete the setting requiring traversing the maze. But in figure 4 the weight of the push prior is highest and sometimes 1 while traversing the maze. What accounts for this counterintuitive behavior? Why is the push prior so much stronger than navigation in figure 6? The authors describe that their algorithm relies on the presence of labeled skill data for offline training and depends on a set of specified skills in advance.",
" The authors present a method for 1) extracting skills from offline demonstration datasets and 2) using these learned skills for accelerated learning of downstream tasks, where the new task might often require composing two or more skills (either concurrently, or sequentially). For (1), the authors first train a VAE on action sequences from the offline dataset to learn a latent space of skills. They then extract primitive skills priors P^i_a(z|s_t) (one for each dataset D_i) by minimizing a KL divergence term (line 140 in the paper). Finally, when learning a new task, the policy that is being learned is constrained to stay close to a mixture of skill priors through a KL divergence term. The weights of this mixture (called the adaptive weight module) is learned so as to maximize the Q-values of the downstream task. \n\nThe authors evaluate their method on three simulated navigation tasks. In two of the tasks, two skill priors need to be combined for learning a new task, while there are three skill priors in one of the tasks. The authors provide comparisons against SPiRL (the method on which they closely build), learning from scratch, and naively combining BC with RL, and show that their method significantly outperforms these other techniques. Strengths\n\n- The paper tackles an important problem – how can we compose a diverse set of previously learned skills to solve new tasks quickly? \n- The paper shows strong performance when compared to the mostly closely related work - SPiRL. \n\nWeaknesses\n\n- The proposed method has a lot of moving parts: skill encoder / decoders, skill priors (one per skill), downstream policy, Q-function, skill prior generator, adaptive weight module, and so on. All of these various components are closely dependent on each other, and makes me a little concerned about the overall training stability of the method and sensitivity of hyperparameters. \n- The experiments are fairly small-scale. The main point of the paper is that it allows combining multiple skill priors to solve the tasks, but two of the experiments involve combining only two skill priors, and one set of experiments involves combining three skill priors. The experiments are also all in navigation domains, so it is unclear if the method will work on other domains, say robot manipulation or Atari games. \n\n - Since I have concerns regarding the overall training stability of the method (due to the large number of moving parts), can the authors share in detail what kind of hyperparameter sweeps had to be done? Generally speaking, a hyperparameter sensitivity analysis would be very insightful for this work. \n- Did you try running the method on domains other than navigation? What did you find? Even negative results can be useful to look at here. \n- As mentioned in the introduction, one of the relevant related work is PARROT, but there is no comparison provided to this method. Would it be possible for the authors to run this comparison? \n\nI am open to updating my rating if the authors can provide answers to these questions / make edits to the paper that address these concerns. \n I think the limitations section in the paper is very limited. The only limitation that the authors mention is the difficulty of collecting useful prior dataset, but I think the work has other limitation as well (mentioned in the weaknesses and questions section above) that would be worthwhile to discuss in more detail. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"1wqqX33Cgxi",
"MOm8x22BtbC",
"1wqqX33Cgxi",
"R_73ECsZbqq",
"YXfzg__qMT",
"idQrqcfDRSI",
"bdp2GEZgkXZ",
"N1VDdVwgmbv",
"2hnY116bYeo",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa",
"nips_2022_sr0289wAUa"
] |
nips_2022_o4neHaKMlse | Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone | Vision-language (VL) pre-training has recently received considerable attention. However, most existing end-to-end pre-training approaches either only aim to tackle VL tasks such as image-text retrieval, visual question answering (VQA) and image captioning that test high-level understanding of images, or only target region-level understanding for tasks such as phrase grounding and object detection. We present FIBER (Fusion-In-the-Backbone-based transformER), a new VL model architecture that can seamlessly handle both these types of tasks. Instead of having dedicated transformer layers for fusion after the uni-modal backbones, FIBER pushes multimodal fusion deep into the model by inserting cross-attention into the image and text backbones to better capture multimodal interactions. In addition, unlike previous work that is either only pre-trained on image-text data or on fine-grained data with box-level annotations, we present a two-stage pre-training strategy that uses both these kinds of data efficiently: (i) coarse-grained pre-training based on image-text data; followed by (ii) fine-grained pre-training based on image-text-box data. We conduct comprehensive experiments on a wide range of VL tasks, ranging from VQA, image captioning, and retrieval, to phrase grounding, referring expression comprehension, and object detection. Using deep multimodal fusion coupled with the two-stage pre-training, FIBER provides consistent performance improvements over strong baselines across all tasks, often outperforming methods using magnitudes more data. Code is released at https://github.com/microsoft/FIBER. | Accept | This paper proposes a two-stage pre-trained vision-language model which can deal with both high-level and region-level downstream tasks. Experiments show significant improvements over SotA models. The main concerns from the reviews are some missing references, while the authors gave detailed comparisons in the responses. Although reviewer Gvbu's opinion is still somewhat conservative, I think the novelty of the paper is clear and the comparison to the SotA is sufficient. | val | [
"DSHKpcdZzU0",
"svixthLYKba",
"OYFDJ0wzhv0",
"6g7jSSHM2W",
"LJIghAo46je",
"qXPCZaZQNhR",
"_AwM-Kupuzou",
"E_5UhJ4xgxE",
"vgV7oaO9c_Q",
"hHDPO1mHSF4",
"R_b_5hiKp_T",
"E0VDYjRQvhn",
"RBYK-8SFR-7R",
"Dn1zKeOJn7h",
"pg8IJqh7y_2",
"B3VJUXG_iic",
"f5KkA3n0a1",
"e0lQlR-XgN0",
"NiFg9_DGDzO",
"XRtCzJDthe5"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the update!\n\nWe would like to kindly point out that we have evaluation results on ~30 datasets in the paper. Many of these tasks (e.g. VQAv2, COCO object detection, RefCOCO+) are highly competitive, yet we can obtain consistent performance improvements over strong baselines, often outperforming methods using magnitudes more data, which we believe is a significant achievement.",
" I appreciate the authors comprehensive response and new results. While I still find the two-stage training to have marginal improvements on some tasks, it is good to see more insights and the improvement over GLIP. Therefore, I would raise my score by 1. ",
" Dear reviewer, as the discussion period will end in 24 hours, we would love to hear if you still have any concerns and we are willing to discuss them. Thank you for your time!",
" Thank you for the update!",
" I have gone through the comments from the other reviewers and the responses from the authors, and most of my concerns have been addressed well. The proposed two-stage training strategy and the architecture are novel enough. Also, a comprehensive analysis and ablation study are included in the paper. Thus, my final recommendation for this paper is Accept.",
" Dear Reviewer, \n\nWe are truly thankful for your valuable feedback! We have tried to address all of your concerns in our responses. As the author-reviewer discussion period will end soon (until Aug. 9), we would love to hear if you still have any concerns and we are more than happy to discuss.\n",
" Dear Reviewer, \n\nWe are truly thankful for your valuable feedback! We have tried to address all of your concerns in our responses. As the author-reviewer discussion period will end soon (until Aug. 9), we would love to hear if you still have any concerns and we are more than happy to discuss.\n",
" Dear Reviewer, \n\nWe are truly thankful for your valuable feedback! We have tried to address all of your concerns in our responses. As the author-reviewer discussion period will end soon (until Aug. 9), we would love to hear if you still have any concerns and we are more than happy to discuss.\n",
" \n**Q10: It would be also good to see some discussions on the technical limitation of the proposed method.**\n\nThank you for the suggestion! While our proposed framework has been evaluated on a wide range of tasks, it has not been extended to tasks such as semantic/instance/panoptic segmentation yet. In addition, our model relies on pre-trained image and text encoders, thus it would be worth investigating if we can train the whole model from scratch. We leave these as our future work.\n\n[2] Dou, Zi-Yi, et al. \"An empirical study of training end-to-end vision-and-language transformers.\" CVPR 2022.\n\n[3] Li, Liunian Harold, et al. \"Grounded language-image pre-training.\" CVPR 2022.\n\n[4] Zhang, Haotian et al. “GLIPv2: Unifying Localization and Vision-Language Understanding.” ArXiv abs/2206.05836 (2022)\n",
" \n**Q6: When training with object-level bounding box annotations, how does FIBER convert object names into texts?**\n\nWe apologize for the lack of clarity on this matter. The object category names are directly used in their text form separated by full stops as input to the text encoder. We follow the same protocol as in GLIP [3] to be comparable to their experiments. More specifically, the input text will look like this: \"person. bicycle. car. .... toothbrush\", and the model will learn how to ground image regions into these object names. This effectively makes our FIBER model suitable to object detection tasks. An example of input and output predicted by the model can be seen in Fig. 12 of the Appendix.\n\n\n**Q7: The paper claims that the image-text fusion mechanism from FIBER is better than that of GLIP. Could the authors provide some evidence to support this claim?**\n\nWe prove this from the following perspectives.\n* In terms of performance on region-level tasks, by comparing the results of Row #1 and #2 in the table below, it is clear that the fusion mechanism of FIBER is better than that of GLIP.\n* In terms of efficiency, as summarized in Table 2 of the paper, FIBER takes 1.38 s/iteration whereas GLIP takes 2.14 s/iteration under a strictly fair comparison.\n\n\n| | Model | OD on COCO | OD on LVIS | ODinW |\n| -------- | -------- | -------- | -------- | -------- |\n| | | Zero-shot/Finetune | Zero-shot/Finetune | Zero-shot |\n| 1| GLIP-B | 48.1/57.0 | 29.1/51.0 | 44.8 |\n| 2| FIBER-B w/o C.G. VLP | 48.9/57.8 | 31.6/55.8 | 45.1 |\n| 3| FIBER-B | 49.3/58.4 | 35.8/56.9 | 47.0 |\n\nFurther, we would like to stress here on the fact that **GLIP [3] cannot be used for VL tasks that require deep fusion such as VQA and captioning.** Even compared to a newer version of the paper (GLIPv2 [4] on arxiv on 6/12/2022), we still **outperform their approach (GLIPv2-Base) on VQA by 5 points (78.46 vs 73.3)**, even though they use much more data (20M compared to our 4M), and have to train at high resolution (1333x800), even for tasks such as VQA, clearly displaying the efficacy of FIBER and the two-stage pre-training strategy that we propose. During inference, due to its use of OD to first extract region features, GLIPv2 [4] will be also much slower than FIBER on VQA and image captioning tasks.\n\n\n**Q8: FIBER can support training and inference with higher-resolution images due to the Swin Transformer. However, this also increases the training and inference time. It would be good if the authors could make a thorough comparison between Swin Transformer and ViT for vision-language tasks.** \n\nViT-based models are typically constrained to use images at low resolution, due to their quadratic complexity in terms of the size of the image. For object detection (OD) and other region-level tasks that require high-resolution inputs, this becomes a computational challenge, while the Swin transformer allows the computational complexity to remain linear in the size of the image. Furthermore, the encoded multi-scale image features from Swin also make it more suitable for OD tasks. We choose to use Swin for this reason, even though METER [2] proves that it achieves worse performance than CLIP-ViT on image-level tasks after VLP (Please see the response to Q3).\n\nMETER [2] has also compared the computational time between Swin Transformer and ViT for VL tasks (in their Table 12), showing that Swin Transformer (resolution 384x384) has a similar inference time with ViT (resolution 224x224). 
This means that image resolution is not the sole factor for inference time. Further, we show (Table 2 in the paper) that even at higher resolutions, we are saving in terms of training time as well as parameters compared to models using similar backbones (such as GLIP [3]).\n\n\nTo further address your concern, we have also conducted additional experiments comparing Swin and ViT in the table below, listed under Q9.\n\n**Q9: Similarly, it is unclear how much gain does RoBERTa provide over BERT.**\n\nMETER [2] has conducted an extensive survey of different backbones, and they note that RoBERTa-Base and BERT-Base perform similarly both with and without VLP.\n\n\nWe also conducted additional experiments in our settings without VLP, which has been included in our revision. \n\n| Model | Image Encoder | Text Encoder | VQAv2 test-dev |\n| -------- | -------- | -------- | -------- |\n| FIBER | Swin | RoBERTa | 71.97 |\n| FIBER | Swin | BERT | 71.86 |\n| FIBER | CLIP-ViT | RoBERTa | 71.37 |\n\nThe results confirmed the findings of METER [2] that RoBERTa and BERT perform similarly and we find that even with fusion in the backbone, Swin performs slightly better than CLIP-ViT before VLP.\n\n",
" **Q3: Comparisons to existing models using different image and text encoders as well as image resolution and pre-training data.**\n\nIn our result tables, we ensure that we report relevant prior work displaying the current best performance on the tasks while also covering a variety of architectures using different image and text encoders. However, we also make sure to compare to the most similar prior work such as METER [2] & GLIP [3] that use the same image encoder, and similar text encoder. \n\nFurther, since previous work (METER) has thoroughly analysed the effect of different text and image enoders as well as image resolutions, we build upon their work by using the suggested settings for image resolutions. Specifically, for VQA, image captioning and retrieval tasks, we follow the settings used by METER a SOTA coarse-grained model); for phrase grounding, referring expression, and OD tasks, we follow GLIP [3] (a SOTA fine-grained model), both published at CVPR 2022.\n\n\nMETER [2] has made detailed comparisons between different vision and text backbones for VL tasks in their paper (Table 2-4), demonstrating that:\n* For vision encoder, CLIP-ViT-Base (resolution: 224x224) outperforms Swin Transformer-Base (resolution: 384x384) after VLP, while underperforms Swin Transformer-Base before VLP;\n* For text encoder, RoBERTa-Base and BERT-Base perform similarly. \n\nIn their final settings,\n* METER [2] employs CLIP-ViT-Base or Swin Transformer as the vision encoder and RoBERTa as the text encoder;\n* GLIP [3] employs Swin Transformer and BERT for vision and text encoders. \n\nSince we are interested in building a unified model that can solve both coarse and fine-grained tasks, we choose to use RoBERTa as the text encoder, and Swin Transformer as the image encoder, because the encoded hierarchical multi-scale image features are more naturally suitable for object detection tasks. It is important to note that even though METER proved that the CLIP-ViT-Base is better than Swin-Base for VL tasks after VLP (see Table 8 in METER), *our improved fusion in the backbone architecture using a Swin is able to outperform METER [2] on all the image-level tasks, and also outperform GLIP on fine-grained tasks while keeping pre-training data fixed.*\n\nIn summary, we believe we have made fair comparisons with the strongest baselines in the literature.\n\n**Q4: From the phrase grounding experiments in Table 5, it seems that coarse-grained pre-training does not help much.**\n\nThough the FIBER results with and without coarse-grained pre-training do not differ much on R@5 and R@10 scores, we would like to emphasize that **for the more challenging R@1 score, FIBER can improve it from 86.5 to 87.4 with coarse-grained pre-training**, which we consider to be significant. If we look at Table 5 more closely, **our FIBER-B model even outperforms GLIP-L trained on 27M images with box level annotations** (87.4 vs. 87.1 on R@1 score), which is achieved solely due to the coarse-grained pre-training. This achievement highlights the fact that it is not necessary to pseudolabel a huge corpus (27M as in GLIP) to do well on phrase grounding tasks, but instead, our proposed coarse-to-fine grained pre-training strategy can be used as an alternative, and we believe this is a very useful result, and hence feel it is fair to claim the coarse-to-fine pre-training as a key contribution. 
\n\n**Q5: On the other two localization tasks (REC, OD), does coarse-grained pre-training help?**\n\nOur paper proposes a new architecture as well as the coarse-to-fine pre-training strategy. Our architectural improvements already bring us considerable gains compared to prior work on a large variety of tasks, including fine-grained tasks such as OD, REC and phrase grounding. By using the coarse-to-fine strategy, we further push the results up by a few points. On datasets such as RefCOCO+ and Flickr30k where the performance is already so close to the ceiling, we believe that achieving a few points is still meaningful as the numbers are already very saturated. We ran additional experiments to answer your question about how well FIBER does without C.G. pre-training and summarize the results in the table below. We see gains across both tasks (REC & OD) when utilizing the coarse-grained pre-training. Similar to the case of Flickr30k, **on RefCOCO+ the coarse-grained pre-training helps FIBER to get better performance than Large sized model trained with more data**. \n\n| Model | OD on COCO | OD on LVIS | ODinW | RefCOCO+|\n| -------- | -------- | -------- | -------- | -------- |\n| | Zero-shot/Finetune | Zero-shot/Finetune | Zero-shot | Val/TestA/TestB|\n| OFA-L | - | - | - | 84.49/90.10/ 77.77 |\n| GLIP-B | 48.1/57.0 | 29.1/51.0 | 44.8 | - |\n| FIBER-B w/o C.G. VLP | 48.9/57.8 | 31.6/55.8 | 45.1 | 85.04/88.82/78.59|\n| FIBER-B | 49.3/58.4 | 35.8/56.9 | 47.0 | 85.74/90.13/79.38|",
" **Q1: The insights from this paper is not clear. I find it hard to learn something from this paper that can inspire future research.**\n\nWe believe our work provides valuable insights to the community. Below, we summarize our insights from two perspectives: (1) coarse-to-fine pre-training, and (2) fusion in the backbone.\n\nFor **coarse-to-fine pre-training**, our insights are:\n* Compared with early approaches (e.g., UNITER, VinVL) that first perform supervised object detection (OD) pre-training followed by VL pre-training, we advocate a different training paradigm: first perform end-to-end VL pre-training, followed by grounded OD pre-training, so that the model can learn from large-scale image-text data and also perform language-aware OD. \n* Compared with recent end-to-end VLP methods, which either (1) only perform coarse-grained pre-training (e.g., ALBEF, METER, BLIP, VLMo) that is suitable for captioning, VQA and retrieval, or (2) only perform fine-grained pre-training (e.g., MDETR, GLIP) that is suitable for grounding and detection, here, we propose to combine them together, allowing us to design suitable strategies for tasks of different nature. \n* Specifically, **fine-grained pre-training** gives best results at high resolution of input images at the cost of increasing computations, which **can be avoided for image-level tasks such as captioning and VQA**.\n* Coarse-grained training is easier to scale up and provides **an alternative cheaper approach to scaling up fine-grained pre-training** without requiring expensive pseudo-annotation like GLIP and subsequent training on image-text-box data.\n* Fine-grained pre-training can benefit from coarse-grained pre-training when using deep multimodal fusion shared across stages.\n\nFor **fusion in the backbone**, our insights are:\n* In order to perform coarse-to-fine pre-training, we need a new architecture that can be shared across the two pre-training stages. Instead of having dedicated fusion layers after the uni-modal backbones, **we insert cross-attention modules into the image and text backbones, bringing gains in terms of memory and performance**.\n* We apply this new design to Swin-base and RoBERTa, and our detailed ablations provide insights on the best-performing methods to do so (see our response to Q2 below).\n* Using this novel architecture and pre-training strategy, we are able to obtain **SOTA performance on a wide range of VL tasks as well as on OD**. We would like to underline that this is not trivial, and there is no existing work that has performed all the tasks within one framework. \n\nLeveraging these two insights, our proposed architecture coupled with our novel two-stage pre-training pipeline achieves better performance than models using magnitudes more data on coarse-grained tasks (Table 3 & Line 263-265) and models using up to 25x more box-annotated data on fine-grained tasks (Table 5 & Line 281-286).\n\n**Q2: The paper does not have enough ablations.**\n\nBelow, we summarize the ablations present in Appendix A.2 & A.3 that we hope will provide useful insights to readers. \n* Co-attention works similarly to merged attention for fusion in the backbone.\n* *Adding a gating parameter in co-attention* allows adding fusion in more layers, and also gives better performance than merged attention. \n* Adding co-attention in the last 6 layers balances well between efficacy and efficiency.\n* *MLM, ITM with hard negative mining, and ITC* are all important pre-training objectives for training FIBER-style models. 
\n* Empirically, we show better performance compared to other models using the similar image and text backbones, such as METER [2] and GLIP [3], and models with other types of backbones, on various tasks.\n* Additionally, in terms of efficiency, *our fusion in the backbone model consumes half of the FLOPs needed by models using the similar image and text backbones* with late fusion such as METER [2] (line 152, 12.35 vs. 24.04 GFLOPs for one instance), and when compared in terms of training speed, FIBER takes 1.38 s/iteration whereas GLIP [3] takes 2.14 s/iteration (Table 2).\n\nFor more details, please see Appendix A.2 & A.3, where we have provided detailed ablations that guided our architecture design, including ablations on fusion strategies, pre-training objectives, architecture for captioning, and additional results on open-ended VQA, image-text retrieval with re-ranking, detailed few-shot ODinW results. Due to the space limit, these ablations and additional results were only added in the Appendix. \n\nWe hope this alleviates your concern regarding ablations to analyze what effect each technique brings. We believe that our architecture is novel, and the knowledge of how to effectively and efficiently perform multimodal fusion for such new models is a useful insight and provides a solid basis on which to build better models for future research.",
" We thank you for your valuable feedback. We are glad that you found our paper easy to follow, and that you consider our experiments on a wide variety of tasks to be sufficient to evaluate its performance. Below, we address your concerns in detail.\n",
" **Related work:** \n\nThank you for the reference! We have included these papers in the updated version, and we would just like to mention that while building VL models that jointly support image-level tasks (e.g., VQA, retrieval, and captioning) has become a standard practice in end-to-end VLP (see paragraph *VLP for Classical VL Tasks* in Sec. 2), we focus on a unified framework for both image-level tasks and region-level tasks such as phrase grounding, object detection, and referring expression comprehension.\n\n**Indicating text/vision backbones of different models:**\n\nThank you for the suggestion! Some models make special modifications to the backbones and have no clear boundaries between vision and text backbones, hence including all the vision/text backbones would have made the tables too crowded, and we decided to not add it into the main paper submission. The details of these models are provided below, which has also been included in Appendix in our revision. OD is short for Object Detection.\n\n\n| Model | Image Backbone | Text Backbone |\n| -------- | -------- | -------- |\n| UNITER | Frozen ResNet-based OD | Transformer (initialized w/ BERT) |\n| VILLA | Frozen ResNet-based OD | Transformer (initialized w/ BERT) |\n| UNIMO | Frozen ResNet-based OD | Transformer (initialized w/ RoBERTa)|\n| VinVL | Frozen ResNet-based OD | Transformer (initialized w/ BERT) |\n|ViLT | Transformer (initialized w/ ImageNet-ViT) | Transformer (top layers initialized w/ ImageNet-ViT and the embedding layer initialized with BERT) |\n| ALBEF | Transformer (initialized w/ ImageNet-ViT) | Transformer (initialized w/ BERT) |\n|VLMo | Transformer w/ MoE (initialized w/ ImageNet-ViT) | Transformer w/ MoE (initialized w/ ImageNet-ViT further pre-trained on language data) |\n| UFO | Transformer (initialized w/ ImageNet-ViT) | Transformer (initialized w/ ImageNet-ViT) | \n |ViTCAP | Transformer (initialized w/ ImageNet-ViT) | Transformer (top layers initialized w/ ImageNet-ViT and the embedding layer initialized with BERT) |\n|METER-Swin | Swin (initialized w/ ImageNet-Swin) | Transformer (initialized w/ RoBERTa)|\n| METER-CLIP | Transformer (initialized w/ CLIP-ViT) | Transformer (initialized w/ RoBERTa) |\n| MDETR | EfficientNet| Transformer (initialized w/ RoBERTa) |\n| GLIP | Swin (initialized w/ ImageNet-Swin) | Transformer (initialized w/ BERT) |\n| FIBER | Swin (initialized w/ ImageNet-Swin) | Transformer (initialized w/ RoBERTa) |\n\n\n**The effect of the number of layers to fuse cross-modal features:**\n\nThank you for pointing this out! We do have this ablation, as well as several other ablations in Appendix A.2. Due to the page limit and the number of downstream tasks that we have covered in this paper, we present the main evaluation results in the main content and put the ablations and analyses in the appendix. For your convenience, we have also included this ablation study below. \n\n| #Fusion Layers | #Fusion Params. (M) | VQAv2 | $\\Delta$ | \n| -------- | -------- | -------- | -------- |\n| 0 | 0 | 65.65 | --- |\n| 3 | 16.1 | 71.20 | +5.5 |\n| 6 | 26.0 | 71.97 | **+0.77** |\n| 9 | 35.8 | 72.10 | +0.13 |\n| 12 | 45.6 | 72.08 | -0.02| \n\nAs seen here, the performance gap when going from 3 layers to 6 layers is much larger than the smaller gaps between 6 to 9 and 12 layers. We choose to use 6 layers as a good tradeoff between performance and parameter efficiency. \n",
" \n**3. Downstream task performance**\n\na) Image-level tasks: Without using any region-level annotations, we pre-train our model only on data having image-caption pairs and then finetune this model on a variety of tasks. Even though we use less dense annotations, we are able to outperform X-VLM on most tasks, and we attribute this to our improved architecture. \n\n\n| Model | VQA2 | NLVR2 | Flickr30k IR/TR | COCO IR/TR | COCO captioning w/ Cider Optimization |\n|:----------:|:-----:|:-----:|:---------------:|:----------:|:-------------------------------------:|\n| X-VLM (4M) | 78.09 | 84.21 | 86.1/96.8 | 63.1/80.4 | 140.8 |\n| FIBER (4M) | 78.46 | 85.52 | 91.0/96.0 | 69.6/80.1 | 142.8 |\n\nb) Region-level tasks: With our second-stage pre-training bringing in fine-grained annotations, our model outperforms X-VLM on region-level tasks such as referring expression comprehension, notably by more than 2 points on the challenging testB split of RefCOCO+. While our approach can also be used seamlessly without any modifications for the task of phrase grounding on Flickr30k dataset (where each text may correspond to multiple boxes), as well as Object Detection, it is unclear how one would use X-VLM to do object detection. We report results on multiple object detection datasets, outperforming GLIP [3] which is a recent state-of-the-art model focusing mainly on detection. We believe that having a unified model that can tackle core vision tasks such as OD while also achieving state of the art on VL tasks is non-trivial. \n\n\n\n| Model | RefCOCO+ (val/testA/testB) | COCO Val2017 (ZS/Finetune) | Flickr30k Test (R@1,5,10) |\n|--|--|--|--|\n|X-VLM (16M) | 84.51/89.00/76.91 | - | - |\n|FIBER (4M+0.8M) | 85.76/90.13/79.38 | 49.3/58.4 | 87.4/96.4/97.6 |\n\n\n\n\nWe would like to conclude this comparison by noting that while very related, our approach is significantly different from X-VLM in the aforementioned ways, and we believe that our proposed fusion-in-the-backbone architecture could even complement X-VLM-like models to achieve better performance. \n\n[1] Zeng, Yan, Xinsong Zhang, and Hang Li. \"Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts.\" ICML 2022.\n\n[2] Dou, Zi-Yi, et al. \"An empirical study of training end-to-end vision-and-language transformers.\" CVPR 2022.\n\n[3] Li, Liunian Harold, et al. \"Grounded language-image pre-training.\" CVPR 2022.\n",
" Thank you for your feedback on our work, and for acknowledging FIBER to be a good architecture that provides a light-weight solution for multi-modal fusion. We apologize for missing the related work X-VLM [1], we agree that it is quite relevant to the discussion in our paper, and have included it in our revision. \n\nWe will now address your question regarding the comparison to X-VLM along three axes - motivation of the approach, technical details, and performance on downstream tasks.\n\n**1. Motivation**\n\n| X-VLM | FIBER |\n|:------------------------------------------------------------------------------------------------------------------------------ | ----- |\n| Emphasis lies in leveraging fine-grained alignment information for image-level VL tasks such as VQA and image-text retrieval, without considering tasks such as object detection. | Emphasis lies in building a unified framework for efficient VL pre-training that benefits both image-level and region-level VL tasks, while minimizing the burden for tasks that do not require region-level pre-training. More concretely, in our approach, image-level tasks, such as VQA, image captioning and retrieval, do not make use of region-level training, as we believe they can be tackled satisfactorily at the \"coarse\"-grained level after our first-stage pre-training. As mentioned in line 41-49, this division of labor allows us to design strategies suitable for different kinds of tasks. For example, region-based pre-training is beneficial for tasks such as object detection and phrase grounding, giving the best results at high resolution of input images at the cost of increased computational burden, which can be avoided for tasks such as captioning, VQA and retrieval. |\n\n**2. Technique**\n\nIn terms of methodology, X-VLM has a one-stage pre-training recipe, where it mainly incorporates object- and region-level features into input representations, and proposes to add a bounding box prediction objective, while we propose a two-stage coarse-to-fine pre-training strategy and a new fusion-in-the-backbone architecture that can potentially improve methods such as X-VLM.\n\n\n| X-VLM | FIBER|\n|:-------- |:-------- |\n|**Fusion on top of the backbones**: X-VLM requires each fusion layer to be equipped with a self-attention block, a cross-attention block and a feedforward network block. | **Fusion in the backbone**: We propose a new fusion-in-the-backbone architecture that can perform modality fusion efficiently (as discussed in Line 57-70 and Sec. 3.1). Specifically, we only need to insert a single cross-attention block into each fusion layer. We have also shown concrete comparisons of the fusion in terms of number of parameters (110M added parameters for GLIP [3] and METER [2] vs 26M in FIBER) and training time (1.38 s/iteration for FIBER vs 2.14 s/iteration for GLIP [3]) of FIBER compared to METER [2] and GLIP [3], which use a later fusion strategy than our approach. In terms of FLOPS, FIBER only consumes half of the FLOPs needed by METER [2] (12.35 vs. 24.04 GFLOPs for one instance. Even with the lighter-weight approach, we are able to outperform METER [2], GLIP [3] as well as X-VLM across a wide variety of tasks |\n| **One-stage pre-training**: X-VLM operates in a one-stage pre-training setup by using both image-level and region-level annotations simultaneously. They overcome the fact that not all datapoints have region-level annotations by ensuring half of the images in a batch contain bounding box annotations during data sampling. 
In cases where the coarse-grained data is scaled up, this can become a hurdle with severe data imbalance. Further, the benefits of using different resolution of inputs (especially high resolution for fine-grained tasks) is not explored. | **Two-stage pre-training**: As discussed in the paper (Line 11-15, 31-49, 71-91), we believe that separating VLP into stages can have several benefits. First, coarse-grained pre-training is easier to scale up because it is possible to crawl massive image-caption data from the Internet, while it can be costly to annotate data with bounding box information at a large scale. Second, for object detection and phrase grounding, the regions of interest can be very small, and in these cases it is favorable to train with high input resolution such as 800 x 1,333. This high resolution of input images, while ensuring better performance, makes training expensive if done at the scale of internet scraped data. This is why we propose to split pre-training into two stages to obtain the best of both worlds - ease of scaling up to obtain good performance on a variety of tasks while remaining feasible in terms of computational complexity.|\n",
" We thank the reviewers for their valuable feedback. We are encouraged that they found our work to be novel and that it provides a new architecture that is more lightweight than previous fusion modules (Reviewer SULZ). We are glad that they appreciate the performance improvements over SOTA on a wide variety of both high-level and region-level downstream VL tasks (Reviewer SULZ, Reviewer fMci, Reviewer Gvbu). We are especially pleased that (Reviewer Gvbu) found our paper to be well-written and very easy to follow while doing a good job of highlighting the architectural differences between FIBER and existing models. We address specific reviewer comments below and will incorporate all feedback into the paper. At this stage, we have added the extra details into the appendix of the revision and will reorganize it into the extra page for the camera ready. Finally, we will be releasing our code and weights to ensure reproducibility.",
" This paper proposes a new vision-language model that can deal with both VL tasks and region-level understanding tasks.\nThe proposed model, Fiber inserts cross-attention to the model to learn multimodal fusion.\nThe pretraining leverages two kinds of data: coarse-grained image-text pairs and fine-grained image-text-box data.\n Strengths:\n\t1. The model is novel. The proposed fusion in the backbone is a good architecture by inserting cross-attention layers. I like this solution which is more light-weight than previous fusion modules. The authors demonstrate this in terms of parameter size (Fiber adds 26M parameters while METER adds 110M).\n\t2. Performance is good. It outperforms METER by around 2/% on VQA and NLVR. Also, it has a good performance on region-level understanding tasks.\n\nWeaknesses:\n\t1. My biggest concern is a recent work X-VLM [1]. Could the authors distinguish the proposed method and X-VLM model? The motivation to leverage both coarse- and fine-grained features is similar. And the authors did not discuss the differences in the model architecture and performance. Especially, X-VLM is a one-stage model and I feel like the one-stage model is easier to implement and use.\n\n\n[1] X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) Why do not compare the performance with X-VLM, which is a recent vision-language pre-trained model? N/A",
" This paper proposes to fuse visual and textual features with cross attention in the backbones for vision-language pre-training, which makes the model applicable to both high-level vision-language understanding tasks (e.g., VQA, captioning) and region-level image understanding tasks (e.g., object detection, phrase grounding). Moreover, the model is pre-trained in two-stage coarse-to-fine manner (i.e., first high-level VL tasks then region-level VL tasks). Experiments are conducted on a wide range of VL tasks to demonstrate the superiority of the proposed method. Strengths\n\n(1)\tThe proposed model is the first VLP model that can be applicable to both high-level and region-level downstream VL tasks.\n\n(2)\tA two-stage pre-training strategy is proposed to pre-train the model in coarse-to-fine manner.\n\n(3)\tSuperior performances are achieved compared with SOTA methods on a wide range of VL tasks.\n\nWeaknesses\n\n(1)\tSome related works that jointly support VQA, Retrieval, and captioning are missing:\n[A] Unified vision-language pre-training for image captioning and vqa[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(07): 13041-13049.\n[B] Scheduled sampling in vision-language pretraining with decoupled encoder-decoder network[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(10): 8518-8526.\n\n(2)\tAs shown in Table 5, it is better to indicate the type of vision/text backbone of each run in other tables (Table 3, 4, 6, 7), pursuing a fair comparison under same vision/text backbone.\n\n(3)\tThe ablation study about the effect of the number of layers to additionally fuse cross-modal features in the backbones is missing.\n Pls check the weakness section. The pre-trained model in this paper can not be directly transferred to all the considering VL tasks in zero-shot manner.",
" This paper proposes FIBER, a multimodal transformer architecture with two sets of cross-attentions: image-to-text and text-to-image. The model is pre-trained with two stages. The first stage uses image-text pairs to learn vision-language interaction with some standard pre-training objectives. The second stage uses images with bounding box annotations to learn language-grounded localization. The pre-trained model is evaluated on a diverse set of vision-language and object detection benchmarks. Strengths: The paper is well-written and very easy to follow. The paper clearly states its improvement over existing methods: an additional text-to-image cross-attention mechanism and an additional stage of fine-grained pre-training. The paper does a good job to highlight the architectural differences between FIBER and existing models. The evaluation covers a sufficient number of downstream tasks.\n\nWeaknesses: I have two major concerns about the paper as explained below.\n1. The paper puts together multiple techniques to build a model. However, most of the techniques have been proven to work well by existing literature. Therefore, the insight from this paper is not clear to me. The paper does not have enough ablation experiments to analysis what effect each technique brings, and how the proposed architectural design compares with some alternative design choices. To put it short, I find it hard to learn something from this paper that can inspire future research.\n2. The paper needs to be more careful when claiming better performance over existing models. The number of pre-training images is an important factor for pre-training performance, but not the only factor. FIBER uses Swin Transformer and RoBERTa, which are more powerful unimodal backbones than some of the existing ones. From the appendix, it also seems that FIBER uses higher-resolution images when finetuning on the downstream tasks, compared to most existing models. - The coarse-to-fine two-stage pre-training is claimed as a key contribution. However, from the phrase grounding experiments in Table 6, it seems that coarse-grained pre-training does not help much. Does coarse-grained pre-training help the other two localization tasks, i.e., REC and OD?\n\n- When training with object-level bounding box annotations, how does FIBER convert object names into texts? It is an important piece of information that need to be discussed in the paper.\n\n- The paper claims that the image-text fusion mechanism from FIBER is better than that of GLIP. Could the authors provide some evidence to support this claim?\n\n- FIBER can support training and inference with higher-resolution images due to the Swin Transformer. However, this also increases the training and inference time. It would be good if the authors could make a thorough comparison between Swin Transformer and ViT for vision-language tasks. Similarly, it is unclear how much gain does RoBERTa provide over BERT.\n\n The authors have addressed the potential social impact. It would be also good to see some discussions on the technical limitation of the proposed method."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"svixthLYKba",
"OYFDJ0wzhv0",
"XRtCzJDthe5",
"LJIghAo46je",
"Dn1zKeOJn7h",
"e0lQlR-XgN0",
"NiFg9_DGDzO",
"XRtCzJDthe5",
"hHDPO1mHSF4",
"R_b_5hiKp_T",
"E0VDYjRQvhn",
"RBYK-8SFR-7R",
"XRtCzJDthe5",
"NiFg9_DGDzO",
"B3VJUXG_iic",
"e0lQlR-XgN0",
"nips_2022_o4neHaKMlse",
"nips_2022_o4neHaKMlse",
"nips_2022_o4neHaKMlse",
"nips_2022_o4neHaKMlse"
] |
nips_2022_AcHUIG2wA8- | Non-Gaussian Tensor Programs | The Tensor Programs framework has produced a series of powerful results by 1) expressing any deep learning computation of concern as a principled composition of element-wise nonlinearities and matrix multiplication, and 2) inductively reasoning about the program behavior as the sizes of the matrices in the program tend to infinity. For example, this framework helped to show that infinitely wide neural networks exhibit Gaussian process behavior at initialization and evolve like a kernel model during training in the so-called NTK parameterization (Yang, 2019b, 2020a; Yang and Littwin, 2021). Moreover, this framework yielded a novel parameterization, coined μP (Yang and Hu, 2021), that for the first time enabled hyperparameter tuning for enormous networks too expensive to train more than once (Yang et al., 2022). However, this framework has so far been limited to Gaussian initialized weights, while uniform or truncated Gaussian distributions are more prevalent in practice. This work extends Tensor Programs to general non-Gaussian weights, thus recovering all of the above results in all practical settings. | Accept | This submission is borderline. Reviewers were generally in consensus --- all felt that the theoretical contribution is sound and non-trivial, but delivers a relatively minor addition to the Tensor Programs framework. I fully agree with this perspective, and similarly to all reviewers, recommend that the paper be accepted and the NeurIPS community will be the judge of how significant its contribution is. | train | [
"EzvqbrvFvwr",
"YEkW6alfT1n",
"5L1pMv6N3J6",
"f9XWxu3-1h7",
"m2JrcpRD8CB6",
"H2qPuFyzBvs",
"3dhIdK91_tR",
"cTRSILK1in",
"5f4YWKe24tt",
"5Aes-X7Rm5N"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I want to thank the authors for their response and clearing up some of my misunderstandings about their work. The theoretical contribution is much stronger than I initially understood and I'll update my scores/review appropriately.",
" I appreciate the authors cleaning up the notation and thank you for addressing my questions. I think the central result here is a nice addition to deep learning theory, but more of a complement to existing tensor programs theory rather an entirely novel perspective. This is a fairly niche result that will probably not affect a large audience, is valuable, nonetheless. Consequently, I keep my rating the same, \"6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.\"",
" We thank the anonymous reviewer for valuable comments!\nWe start with addressing the question:\n\n\"Why do the authors consider it useful to note that the semi-circle distribution and Marcenko-Pastur laws for non-Gaussian matrices fall out of Theorem 2? Going through tensor programs is not necessarily easier than existing proofs. What perspective/insights might deriving these results from TP have to offer?\"\n\nThis is definitely not the central result of the paper, that’s why it is in the appendix. Still, while it is not necessarily easier to obtain the semi-circle and Marchenko-Pastur distributions using the TP formalism, the point was to demonstrate that this formalism is general enough to at least re-derive these classical results. More importantly, using the same method, the TP formalism allows to prove the Free Independence Principle for non-Gaussian TPs (in particular, for non-Gaussian-initialized neural nets) - the result which is, to the best of our knowledge, novel, and could hardly be obtained with the same proof methods as for the above-mentioned classical results. We discuss this result in Appendix B; it generalizes the original Gaussian result of TP III [1].\n\nConcerning the weaknesses noted by the reviewer:\n\n\"Notation is cumbersome.\"\n\nIn the latest revision, we have added more clarifying remarks, improved notation, and got rid of the most of the multiline formulas in the main.\n\n\"Any type of diagram or visual to accompany the proof of Theorem 2 would have been nice.\"\n\nWhile we agree with this comment, it is not obvious for us how to visualize the proof of the main theorem at the moment.\n\n\"Requirement of smooth nonlinearities.\"\n\nSmoothness requirement is strictly necessary for our proof method to work (it relies on Taylor expansion). Relaxing it seems challenging; it could constitute a good direction for future research.\n\n[1] Yang, G. (2020). Tensor programs iii: Neural matrix laws. arXiv preprint arXiv:2009.10685.",
" First of all, we thank the reviewer for valuable comments!\nBelow, we address the main concern posed by the reviewer.\n\n“The specific contribution of this paper is incremental, and the result (although apparently nontrivial) does not seem particularly surprising, so its broader significance seems limited at best.”\n\nWhile our results may not seem surprising when they are already present, we doubt these results (distribution universality) were apparent beforehand. Indeed, to the best of our knowledge, all empirical results supporting theoretical claims on infinitely wide nets applied Gaussian initialization as theory required it. However, our non-Gaussian Master theorem (Theorem 2) has the same corrolaries as the previous Gaussian one (Theorem 1), meaning that the previous theory can be readily and safely applied.\n\nMoreover, we would like to underline a small subtlety in our results that may seem surprising.\nNote that the limit in Theorem 2 does not change only if we replace the distributions of matrix entries with non-Gaussian ones, but this is not the case for the distribution of vector entries (Setup 2 requires them to be Gaussian). We can still model non-Gaussian vectors by applying a nonlinearity to Gaussian ones; that’s what we had in mind while considering the applications to neural nets, see Section 3 and Appendix A. However, swapping Gaussian vectors with non-Gaussian ones with a nonlinearity requires modifying the program, meaning that a program expressing computations in a Gaussian-initialized neural net and a program for the corresponding non-Gaussian initialized neural net can be different. Therefore they could have different inifnite-width limits.\n\nStill, Theorem 2 works and the limit exists in both cases, that’s why we do have NNGP correspondence and convergence to a kernel method (Corrolaries 1 and 2), but the distribution of vectors (e.g. biases) could affect the kernels.\n\nThis means that the limit predicted by the Master theorem is not generally invariant wrt the distribution of vector entries. We demonstrate it in our experiments with a GRU network that converges to different NNGPs when bias vectors are Gaussian and non-Gaussian; see Appendix I in the revised version of the manuscript. We have also added a small remark in the main.\n",
" We thank the anonymous reviewer for valuable comments. We begin with addressing the concerns posed by the reviewer:\n\n\"the paper is generally dense and dry, therefore challenging for non-specialists to follow.\"\n\nIn the latest revision, we have added more clarifying remarks, improved notation, and got rid of the most of the multiline formulas in the main.\n\n“The authors may consider moving the proofs to the appendix and using the space to explain more about Tensor Program and its application.”\n\nThe proofs are in the appendix already. Since Theorem 2 is the main result of our paper, in order to give the reader a gist of the proof technique, we decided to put a proof sketch for the simplest non-trivial tensor program possible, see Section 4.1. We discuss the applications in Section 3 and Appendix B.\n\n“In Section 2, readers may want to know how to convert a practical convolutional networks (with weight sharing convolutions and batch normalization) to Tensor Program. Since the authors emphasize on universality, it is also informative to discuss whether the scheme is applicable in graph neural networks, where the central limit theorem may not hold given sparsity.”\n\nOur work builds on the TP series, where representations of numerous practical architectures (including convolutional and graph neural nets) as tensor programs were discussed in detail, see e.g. [1], Appendix A.\n\n“in Section 3, it is useful to discuss how to use the theoretical result for hyperparameter tuning, instead of citing pror works.”\n\nOur results do not suggest any new techniques. Instead, one of the corollaries of our main result is that the existing hyperparameter tuning technique is universal wrt initial weight distribution. Therefore the existing technique remains the same. Discussing the hyperparmeter tuning method of [2] would mean merely re-stating their method without any modifications.\n\n\"Lastly, since the result only hold for infinite-width case, it is helpful to include some empirical study to show the gap between theory and practice.\"\n\nWe have included some empirical results on NNGP correspondence and convergence of the initial NTK in Appendix I of the latest revision.\n\nWe then focus on certain small misconceptions we have noticed in the review.\n\n\"The paper extends the Tensor Program to the non-Gaussian case, which justifies a previous hyperparameter tuning method for non-Gaussian weight initialization.\"\n\nWhile justification of the mentioned hyperparameter tuning method is probably the main practical application of our main result (Theorem 2), it has numerous other (theoretical) applications such as NNGP correspondence, convergence to a kernel method in the limit of infinite width, and the Free Independence Principle, see Section 3 and Appendix B.\n\n\"The proposed theory helps to explain some previous practices in learning neural networks; however, it does not suggest new techniques nor has empirical verification.\"\n\nWhile our theory does not suggest any new techniques, it demonstrates that the previous results are universal, meaning that exactly the same techniques (e.g. for hyperparameter tuning) should work the same way for all weight initializations - a result which is, as we believe, not obvious.\nWe empirically validate some of the applications of our Theorem 2, including NNGP correspondence and convergence of the initial NTK; see Appendix I in the revised version of the manuscript. We plan to add more in future revisions.\n\n[1] Yang, G. (2019). 
Wide feedforward or recurrent neural networks of any architecture are gaussian processes. Advances in Neural Information Processing Systems, 32.\n\n[2] Yang, G., Hu, E. J., Babuschkin, I., Sidor, S., Liu, X., Farhi, D., ... & Gao, J. (2022). Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer. arXiv preprint arXiv:2203.03466.",
" We would like to thank the anonymous reviewer for valuable comments!\nTo begin, we address the main question posed: \n\n“With the restrictions needed to not require Gaussian weights it's unclear why simpler extensions could not have been done. For example, Appendix section A. states that images of smooth polynomial maps can be used to get around the Gaussian initial bias and layer weights. Why could the same trick not been used in original formulation?”\n\nThe trick we used to convert Gaussian vectors to non-Gaussian ones cannot be applied to matrices simply because tensor programs do not allow applying an elementwise nonlinearity to an n x n matrix (A-variable). Indeed, the TP iteration (eq.(1)) allows for applying a nonlinearity elementwise to a finite set of vectors and to a final set of scalars, but it does not support applying it to an A-matrix.\n\nWe have to stress out that input layer and output layer weights of a neural network are not expressed with A-matrices; instead, they are expressed with a set of vectors. For example, if a network has k input neurons and width n, its input layer weights are expressed as a set of k vectors. For all layers except for the input and the output ones, we cannot do the same because for a hidden layer with n input and n output units we would need n vectors - a number that obviously depends on n, which goes to infinity. That’s why we are able to map Gaussian input and output weights to non-Gaussian ones using a nonlinear function but we cannot apply any mapping to any other weight matrix.\n\nSecond, we would like to correct several probable misconceptions we have noticed in the review:\n\n“Starting from a theoretical result showing that nearly any neural architecture with Gaussian weight initialisation in the limit of infinite width exhibits Gaussian Process behaviour, this paper extends this theorem to apply where weights have non-Gaussian initialisation.”\n\nWe start with a stronger result from the previous work, Gaussian Master theorem (Theorem 1), from which Gaussian process behavior for Gaussian-initialized neural nets follows as a corollary. Our main result is non-Gaussian Master theorem, Theorem 2, from which Gaussian process behavior for non-Gaussian-initialized neural nets follows as a corollary, Corollary 1.\n\nThe Master theorem is a much more general result then just Gaussian process behavior in the limit of inifinite width. First, a tensor program can express not only a forward pass of a neural network but also a backward pass and any number of gradient descent steps, while we need only the 1st forward pass to prove NNGP correspondence. Second, generally, vectors generated by a program do not exhibit a Gaussian process behavior; this is the case e.g. for programs expressing gradient steps. 
Nevertheless, vectors generated by a program expressing the very first forward pass indeed tend to Gaussians; this is NNGP correspondence, which is our Corollary 1 and the main topic of TP I [1].\n\n“This requires assuming that all matrices (A) used have iid entries from a Gaussian distribution, all bias vectors also come from an iid Gaussian distribution, and that all non-linearities be polynomially-smooth.”\n\nConcerning weight distributions, this is not true: we require all matrices in a tensor program to have iid entries from a distribution with zero mean, variance inversely proportional to width, and all higher moments existing; it does not have to be Gaussian, see Setup 2.\n\nWhen applying our result to neural nets, we express input and output layer weights as sets of vectors, while weights of all other layers are expressed as matrices. We need Gaussian output weights for Corollaries 1 and 2 to hold but all other weights can be non-Gaussian, see Setup 3.\n\n[1] Yang, G. (2019). Wide feedforward or recurrent neural networks of any architecture are gaussian processes. Advances in Neural Information Processing Systems, 32.",
" The paper generalizes the Tensor Programs framework, which computes infinite-width limits of deep networks / differentiable computation graphs under the assumption that weights are randomly initialized from independent Gaussians with variance 1/n, to the case of iid non-Gaussian initializations. The result in this work is an incremental extension of the Tensor Program framework to non-Gaussian weight initializations, and is clearly not intended as a standalone exposition of the framework. Unfortunately I have only a vague familiarity with the existing Tensor Program line of work and so can only provide superficial comments, not critique the result in any detail.\n\nSignificance: this result contributes to the theory of infinite-width limits of neural nets, which is certainly of general interest and has motivated interesting new methods that may be practical in some settings, e.g., for hyperparameter search. The specific contribution of this paper is incremental, and the result (although apparently nontrivial) does not seem particularly surprising, so its broader significance seems limited at best. Does this extension 'deserve' to be its own top-conference paper, rather than (say) an appendix to the general framework? I don't feel qualified to judge, but given the community's recent interest in this line of work I'm willing to give it some benefit of the doubt.\n\nQuality and clarity: the paper is generally well-written. Technical conditions and results are clearly stated, as are the limitations and the relationship to previous results. I did not attempt to check the proof (section 4) but it appears well-motivated, structured and carefully reasoned; I believe it is likely correct. \n\nOriginality: the proof technique appears to follow a similar approach used in other recent work (Chen and Lam 2021), but its application and extension within the Tensor Programs framework is novel and, I think, nontrivial. nits:\n- line 24: 'applications of Law of Large Numbers' doesn't parse, I would say 'applications of *the* Law ...'\n- line 41: typo 'identially' -> identically\n The technical conditions and limitations of the result are clearly articulated and discussed.",
" The authors generalize the Master Theorem of Tensor Programs (TP) by demonstrating that non-Gaussian initializations result in many of same consequences of TP including (perhaps most interestingly) the NNGP correspondence (infinitely wide neural networks behave as Gaussian processes). Differences between this result and the original Gaussian initialized TP include the following: requirement of smooth nonlinearities, and weaker convergence (in probability versus almost surely). Much of the manuscript is dedicated towards proof. Strengths:\n-The formalism introduced by TP was (and still is) of both theoretical and practical use (say, for hyperparameter tuning). Generalizing it even further is beneficial for the DL community. \n-Proof of the non-Gaussian Master Theorem is thoroughly detailed. \n-Nearly all of the consequences of the Gaussian Master Theorem are obtained. \n\nWeakness:\n-Notation is cumbersome. This is a challenging paper, indeed. Tensor Programs I (G. Yang) expends serious effort towards the development of NETSOR to aid the architectural universality result, but this paper is at times a pure exercise in analysis. Any type of diagram or visual to accompany the proof of Theorem 2 would have been nice. \n-Requirement of smooth nonlinearities (although this is addressed, several times over). 1. Why do the authors consider it useful to note that the semi-circle distribution and Marcenko-Pastur laws for non-Gaussian matrices fall out of Theorem 2? Going through tensor programs is not necessarily easier than existing proofs. What perspective/insights might deriving these results from TP have to offer? Limitations and societal impacts are discussed. ",
" The paper extends the Tensor Program to the non-Gaussian case, which justifies a previous hyperparameter tuning method for non-Gaussian weight initialization. Since I don't have a background in Tensor Programs, I can only comment on writing.\n\nThe proposed theory helps to explain some previous practices in learning neural networks; however, it does not suggest new techniques nor has empirical verification. Moreover, the paper is generally dense and dry, therefore challenging for non-specialists to follow. The authors may consider moving the proofs to the appendix and using the space to explain more about Tensor Program and its application.\n\nFor example, in Section 2, readers may want to know how to convert a practical convolutional networks (with weight sharing convolutions and batch normalization) to Tensor Program. Since the authors emphasize on universality, it is also informative to discuss whether the scheme is applicable in graph neural networks, where the central limit theorem may not hold given sparsity.\n\nMoreover, in Section 3, it is useful to discuss how to use the theoretical result for hyperparameter tuning, instead of citing pror works. Lastly, since the result only hold for infinite-width case, it is helpful to include some empirical study to show the gap between theory and practice. Not applicable.",
" This work continues in the spirit of Tensor Program series of papers. Starting from a theoretical\nresult showing that nearly any neural architecture with Gaussian weight initialisation in the limit\nof infinite width exhibits Gaussian Process behaviour, this paper extends this theorem to apply where\nweights have non-Gaussian initialisation. This requires assuming that all matrices (A) used have iid\nentries from a Gaussian distribution, all bias vectors also come from an iid Gaussian distribution, and\nthat all non-linearities be polynomially-smooth.\n Originality:\n\n This work is very original and I am not aware of any other theoretical contributions of this sort\n in the literature. \n\n Technical Quality:\n\n The work is technically sound and although I did not carefully inspect the proofs I don't have\n significant reasons to doubt the results. Although there are not experiments in the paper, given\n the theoretical nature of the contribution I don't think it's necessary.\n \n Clarity:\n\n The work relies heavily on familiarity with previous Tensor Programs papers for background and motivation\n which makes the paper and its contribution less self-contained. But given some familiarity, the paper was\n straightforward to read and follow.\n \n Significance:\n\n The paper offers to me seems a fairly modest contribution. With the restrictions needed to not require Gaussian\n weights it's unclear why simpler extensions could not have been done. For example, Appendix section A. states that\n images of smooth polynomial maps can be used to get around the Gaussian initial bias and layer weights.\n Why could the same trick not been used in original formulation?\n The paper offers to me seems a fairly modest contribution. With the restrictions needed to not require Gaussian\n weights it's unclear why simpler extensions could not have been done. For example, Appendix section A. states that\n images of smooth polynomial maps can be used to get around the Gaussian initial bias and layer weights.\n Why could the same trick not been used in original formulation?\n I believe the authors have adequately addressed the limitations and potential negative societal impacts of the work."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
1,
3
] | [
"H2qPuFyzBvs",
"5L1pMv6N3J6",
"cTRSILK1in",
"3dhIdK91_tR",
"5f4YWKe24tt",
"5Aes-X7Rm5N",
"nips_2022_AcHUIG2wA8-",
"nips_2022_AcHUIG2wA8-",
"nips_2022_AcHUIG2wA8-",
"nips_2022_AcHUIG2wA8-"
] |
nips_2022__bqtjfpj8h | Ask4Help: Learning to Leverage an Expert for Embodied Tasks | Embodied AI agents continue to become more capable every year with the advent of new models, environments, and benchmarks, but are still far away from being performant and reliable enough to be deployed in real, user-facing, applications. In this paper, we ask: can we bridge this gap by enabling agents to ask for assistance from an expert such as a human being? To this end, we propose the Ask4Help policy that augments agents with the ability to request, and then use expert assistance. Ask4Help policies can be efficiently trained without modifying the original agent's parameters and learn a desirable trade-off between task performance and the amount of requested help, thereby reducing the cost of querying the expert. We evaluate Ask4Help on two different tasks -- object goal navigation and room rearrangement and see substantial improvements in performance using minimal help. On object navigation, an agent that achieves a $52\%$ success rate is raised to $86\%$ with $13\%$ help and for rearrangement, the state-of-the-art model with a $7\%$ success rate is dramatically improved to $90.4\%$ using $39\%$ help. Human trials with Ask4Help demonstrate the efficacy of our approach in practical scenarios. | Accept | I thank the authors for their submission and active participation in the discussions. This paper introduces a method for learning a policy that can ask an expert for help, i.e., to obtain the expert action. On the positive side, reviewers found the method to be general [uya8], original and significant [gw2r], intriguing in terms of being able to reuse an existing policy [HGan], and tackling an important problem [rsmr,bRWC], and the paper to be clear [uya8,gw2r,rsmr,bRWC]. In terms of negative points, reviewers were concerned about the novelty [bRWC], unimpressive qualitative results despite strong quantitative results [bRWC], and issues with the range of baselines [bRWC,rsmr] and ablations considered [uya8]. Overall, the paper is borderline. However, bRWC indicated they would raise their score but I don't see this being reflected. Furthermore, in my view reviewer rsmr's concerns regarding baselines and ablations have been addressed by the author rebuttal. Thus, I am siding with reviewers gw2r and HGan, and recommend acceptance. However, I very strongly encourage the authors to further improve their paper based on the reviewer feedback, in particular the points raised by reviewer bRWC regarding the importance of the Success Prediction component of the method. | train | [
"HmoDcek1dX",
"tMcswrMk0Gb",
"lsQwtRvLnSh",
"XwTs8qhnP1",
"fbrUKeh9aNI",
"Xe7McnXJP5m",
"pkHnXdaX6_C",
"CmTCm7EeCk",
"mjdB5LyCNaE",
"8aVwUwqqNxL",
"sL-0zxzb5tT",
"wdfdDPaECxzq",
"s_nn2Lz6219",
"FbsxKgWxL8",
"ojIdQu2Vz-1",
"tIaCYCU5AtD",
"_Ro-f-VRKCe",
"xcBWyJZdKcT",
"IuolXUqfaKp",
"tzew69pMfGP",
"ShwB63hMO8L",
"kUQGOPIQkV9"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are happy to hear that our rebuttal has addressed some of your concerns and thank you for increasing your rating of our work.\n\n**Therefore, I'm increasing my score to a borderline, hoping that the authors would clarify more on the potential of the current method as a building block for future studies.**\n\nWe see two great future directions which can leverage our Ask4Help module to further improve embodied models:\n\n- Ask4Help can be additionally used as a way of highlighting the limitations of a model, and benchmark how far our off-the-shelf models have come based on the amount of help asked (12% for pre-trained agent v/s 98% for a random agent). For instance, in rearrangement, the Ask4Help policy asks for expert supervision for navigation v.s. interaction actions with ratio of 6:1, this suggests that navigation actions may be more uncertain than interaction actions. This idea is quite general: if we identify some set of potentially relevant features that we believe may contribute to model failure (e.g. mis-recognition, exploration failure, etc) we can use simple statistical models to determine how these features relate to cases where the Ask4Help model queries the expert. For instance, this [plot here](https://anonymous-neurips22.s3.us-west-2.amazonaws.com/a4h/object_type_help_analysis.png) shows amount of help needed per object category, which can tell us which object types are mis-recognized or hard to navigate to. This then provides guidance as to how we can improve existing models.\n\n- As you and HGan have noted, we can use the expert supervision provided at inference time (perhaps in a federated learning approach) to update the existing model iteratively. Some exploratory experiments of ours suggest that this is not as simple as simply using imitation learning with the given supervision as catastrophic forgetting can occur: continuous learning is a challenging domain with a highly active community that attempts to tackle just these types of problems. We should note that there are some other interesting ideas regarding how one might reuse the queried expert actions (e.g. a semi-parametric memory that stores the expert actions and recalls them when necessary).\n\nBoth of the above directions are interesting problems that, in our view, would likely lead to their own paper. We believe that our paper makes a useful contribution and serves as a good foundation for these future directions.",
" Thank the authors for the clarifications. The rebuttal addressed my concerns about experimental settings and baseline methods. However, I still feel the online learning paradigm (learning from asking) is crucial in the whole pipeline, as simply choosing between two sets of frozen policies can intuitively face severe problems in new tasks and less expert knowledge. The current contribution of this paper is limited to when to ask, but one intuitive next step is to follow this line of thought and make some analysis on what the frozen policies failed or expected to help the most to provide some hints for future study in embodied AI. Although, as mentioned by reviewer HGan, iterative updates can be easily adopted, the current prompting of frozen policy does not provide much insight into what challenges we might face. Therefore, I'm increasing my score to a borderline, hoping that the authors would clarify more on the potential of the current method as a building block for future studies.",
" - **It is possible that a task progress detector can be enough for the rearrangement task given the nature of the rearrangement task is to put multiple objects instead of one object.**\n\nIt is not clear that designing a progress monitoring module for rearrangement is an easier problem than designing an Ask4Help module. Indeed an Ask4Help module needs to only detect one event, that the agent has reached a state where it cannot complete the task alone, while a task progress monitor must have a holistic understanding of the agent's abilities and the environment state throughout the episode. Given the present stream of work, we believe our contribution is a useful one that allows us to reuse existing Embodied AI models. \n\t\nAdditionally, it is important to note that [rearrangement](https://ai2thor.allenai.org/rearrangement/) is a very complex task that just does not comprise object navigation to multiple objects. It involves interacting with objects, which also includes understanding object affordances and the underlying physics. Additionally, the order in which objects are interacted with is very important since object locations can conflict with each other. \n\nWe would also like to note that, Ask4Help can also give insight on the failure modes of a task. Building on this insight the amount of help asked can be an indicator on how far our existing off-the-shelf have come in terms of autonomous applications. \n\n- **Given the ablation result in the object navigation, one solution is to train a task progress regressor and have different thresholds based on the reward profile.**\n\n\nWe would like to clarify that, even for object navigation, setting the threshold of task progress regressor according to a particular user preference would require access to the unseen validation scenes, however our method generalizes to unseen scenes without ever seeing them. Additionally, this solution would not generalize to tasks of increasing complexity where predicting task success at every timestep is non-trivial.\nAdditionally, as discussed above regarding progress regression in rearrangement, this solution may be strictly more challenging to implement than our proposed method.\n\n \n - **If a reward profile is not in the training set, the model cannot handle it.**\n\nYes the reward embedding is based on a lookup table, note that we do not claim that it generalizes out of the defined reward profiles. However, the learned policy transfers seamlessly to unseen validation scenes and is able to produce varying behaviors when we vary the reward profiles. We provide options from -1 to -30 which covers a broad range of user preference from almost no help to very liberal help. The 30 reward profiles consider a very wide spectrum of behaviors with performances ranging from ~65% to 93%. \n\nAdditionally, it is also possible to train Ask4Help with more reward profiles if the current set of 30 options are not enough. Generalizing to reward profiles outside the defined ones is something we did not experiment with, since from a user perspective it makes sense to have all possible range of options, rather than trying to generalize for out-of-distribution reward profiles. \n\nWe thank the reviewer for their comment about success prediction. We agree that belief is sufficient to learn a good Ask4Help policy and would update our draft to reflect that. However, our core framework of learning a task-agnostic policy to provide expert help is still applicable and useful for future models and tasks. \n",
" We thank the reviewer for their time and response to our rebuttal. The suggested experiments were very insightful and have certainly improved our work. We address the follow up questions below. \n\n- **What are the task and the success rate of the pre-trained agent?**\n\nFor object navigation, the pre-training task is object navigation, and the success rate is 49%. For rearrangement, its rearrangement and the success rate is 7%. We use the EmbodiedCLIP [25] models from Khandelwal et al. as the pre-trained policies. \n\n- **I find the EP metric here to be somewhat misleading: an increase of 10 expert steps (from 17 to 27, a 58% relative increase) makes this metric go from 12% to 99%. Is this because Ask4Help with a pre-trained policy learns to wait before asking for help due to the reward structure? Why doesn't this happen with the random policy?**\n\nGreat question! This is due to the episode lengths being different for the agents. Since the random agent is very bad at doing the task, the expert takes over very early on, and tries to take the shortest path and hence the episode is terminated sooner as compared to a pre-trained agent. The pre-trained agent explores the scene so as to attempt to solve the task on its own.\n\nThe Ask4Help reward structure encourages the agent to complete as much portion of the task autonomously as possible. This does not happen with random policy since it is very bad at performing the task, the only way to reach a favorable performance trade-off is for the expert to do most of the task. Whereas, the pre-trained agent has a success rate of 49% which makes it very good at completing a major portion of the task, so Ask4Help holds off expert help until it is absolutely needed. \n\nHowever, it is important to note that Ask4Help does not necessarily always wait for a long time, it can provide some help early in the episode.\n\n- **I am surprised by the low scores here, as well as by the fact that the success rate is mostly determined by the value of N. [This plot](https://anonymous-neurips22.s3.us-west-2.amazonaws.com/a4h/min_distance_to_tgt_curve.pdf) shows that the initial distance to the target is 10, why is the success rate so low for values of N such as 20 or 30?**\n\nNote that the initial distance to the target is not always 10 meters, as it can be seen in the curve, it can have values in the range of 0-10. \n\nThe success rate is mostly governed by N since this heuristic does not allow the expert to intervene in agent exploration at all. \n\nFor the low scores, we believe that the underlying pre-trained agent generally requires more time to explore and converge closer to the object so it might go to a far off location initially after 20-30 steps, recovering from that and finding the target might require more than 30 expert steps. It is also a possibility that the agent requires some help during this exploration if it gets stuck or is looping in one particular region. This is also an indication that it is ideal if the pre-trained is allowed to explore autonomously as much as possible, it can explore most of the space and usually requires help in getting around immovable obstacles or general recognition failures. \n\n- **Finally, the values chosen for M and N seem to differ substantially from what the Ask4Help policy generally does (with M=157 and N=17). 
I wonder if authors have experimented with values for M and N in that ballpark.**\n\nWe tried an experiment with the value of M=160 and N=40, and that achieved a success rate of 72.8 (as compared to Ask4Help which gets 86.3 with M=157 and N=17). We believe this is an interesting insight as it tells us that naively allowing the agent to go for a certain amount of steps, and then just using the expert is also not as effective as our learned Ask4Help Policy that learns to combine the pre-trained agent and expert in a way that is very performant. We'll include this in the updated draft. We thank the reviewer for this comment. \n\n- **Impact of Success Prediction**\nThe insight that belief is sufficient is a very useful one and we assure the reviewer that we will update our draft to reflect that accordingly. It is a benefit of using OpenReview and engaging in active discussions with reviewers. \n\nHowever, we would like to highlight that the core contribution of our work, learning an auxiliary-policy that can enable reusing off-the-shelf embodied AI models, is useful and important as you've pointed out. \n",
" Thank you for the positive comments and feedback. \n",
" We thank the reviewer for their comments on our response. \n \n- **I like how the Ask4Help policy can be used to highlight a pre-trained agent's limitations.**\nThat is a great point and we thank the reviewer for highlighting this. We would certainly include this in the main draft and it will hopefully allow users to get a deeper insight into their pre-trained agent and tasks. \n\n\n\n- **an Ask4Help system can still collect data examples at the boundary of the frozen agent policy's capabilities.** \nThanks for this pointer, and we agree collecting data that can help identify failure mode, then eventually fine-tuning the deployed policy is an interesting direction, and as suggested, is one of the positive merits of Ask4Help. \n",
" Thank you for the rebuttal and for addressing my concerns. \n\nAfter reading the other reviews, I found that reviewer bRWC raised valuable points, especially regarding quantitive/qualitative results. Thank you for rerunning some experiments with the suggested baselines/ablations.\n\n> On the validation scenes, we find that the expert provides more navigation actions than interaction actions with a ratio of 6:1.\n\nWhile it's not surprising that navigation actions are more uncertain, I think it speaks more about the limitation of the pretrained agent. I like how Ask4Help policy can be used to highlight a pretrained agent's limitations.\n\nBriefly, regarding one of reviewer rsmr's comments:\n> proposed model possesses the disadvantage that it can not learn from the expert's feedback\n\nNot everything has to be end-to-end learning, research that promotes models' reusability is very important. Additionally, iterative learning could be a viable strategy here. For instance, while being deployed in production, an Ask4Help system can still collect data examples at the boundary of the frozen agent policy's capabilities. Then, update the frozen agent policy and redeploy.\n",
" Thanks for the detailed response, it addressed some of my major concerns (e.g. about the data split). I will update my score accordingly, but I would like to ask some follow-up questions first.\n\n**Baseline 1: Ask4Help with random agent**\n\nWhat are the task and the success rate of the pre-trained agent? I find the EP metric here to be somewhat misleading: an increase of 10 expert steps (from 17 to 27, a 58% relative increase) makes this metric go from 12% to 99%. Is this because Ask4Help with a pre-trained policy learns to wait before asking for help due to the reward structure? Why doesn't this happen with the random policy? The SPL metric is doubled when replacing the pre-trained agent with a random one; if I understood this correctly, this means that the agent does not wander around so much.\n\n\n**Baseline 2: hard-coded meta-controller**\n\nI am surprised by the low scores here, as well as by the fact that the success rate is mostly determined by the value of N. [This plot](https://anonymous-neurips22.s3.us-west-2.amazonaws.com/a4h/min_distance_to_tgt_curve.pdf) shows that the initial distance to the target is 10, why is the success rate so low for values of N such as 20 or 30? Finally, the values chosen for M and N seem to differ substantially from what the Ask4Help policy generally does (with M=157 and N=17). I wonder if authors have experimented with values for M and N in that ballpark.\n\n\n**Impact of success preditiction**\n\nIf I understood replies to other reviews correctly, it looks like success prediction does not play a very important role in the final performance of the agent (i.e. the belief state is enough). Since success prediction is presented as an important part of the proposed method, but seems to have little to no effect on the results, the paper should be re-written and updated to reflect this finding. I am concerned about the magnitude of the change required, which might be too big for a rebuttal period.",
" Thanks for addressing my flagged points -- I'll keep my current score! Hoping that my fellow reviewers are also willing to see the merits of this work!",
" I would like to thank the authors for adding the ablations and the clarification of the reward profiles.\n\n- It is good to see success rate prediction is not enough for the rearrangement task, but it is possible that a task progress detector can be enough for the rearrangement task given the nature of the rearrangement task is to put multiple objects instead of one object. Also, since it doesn't show any difference when masking out success rate and the belief in object navigation, it seems that we don't need the success rate predictor at all. Only the belief is sufficient to decide if the model should ask questions. \n\n- Based on the clarification of the reward profile, the embedding is a lookup table for mapping the selected reward profile to whether to ask a question. If a reward profile is not in the training set, the model cannot handle it. Given the ablation result in the object navigation, one solution is to train a task progress regressor and have different thresholds based on the reward profile. \n\nGiven these concerns, I would keep my rating.",
" We thank the reviewer for insightful and detailed comments. We appreciate the positive comments on the importance of the problem, significant performance improvements, strong potential, and clarity of writing. We will now respond to the comments and questions raised in the review.\n \n- **Clarification on motivation of problem setting**\n \nThe ability to ask for help is important with or without test-time adaptation. Clear examples of this are any safety-critical tasks; for instance, currently deployed self-driving cars have proprietary heuristic mechanisms for detecting when the system is uncertain and hand control back to the expert human driver. Beyond these safety-critical settings, asking for help can be used to take a model that is not ready for real-world deployment, because its performance is too low, and immediately make it deployable by injecting expert help. While the area of test-time adaptation is very exciting, it is still a relatively immature technology.\n\nIt is worth mentioning also that there are many potential deployed systems where test-time adaptation may be beyond the limitations of the available hardware (e.g. a cpu-bound house cleaning agent).\n\nBuilding self-adaptive agents is an interesting research question and our work is a step towards achieving that goal. It is part of the future work that we’re considering. However, having the ability to prompt off-the-shelf Embodied AI models in an efficient manner with expert knowledge is a useful tool given that test-time adaptation is still an active problem.\n \n- **Comparison with a constrained amount of help.**\n \nWe performed an experiment where we allow a maximum of only **20** expert steps in an episode (both during training and evaluation) and train an Ask4Help policy and present a comparison with the model confusion baseline. On the unseen validation scenes, Ask4Help achieves a success rate of **70%**, whereas the model confusion baseline we present in the draft (Section 4.2) achieves **61%** success on object navigation. We believe this is a useful insight and would happily include it in the updated draft. Thanks for the suggestion.\n\n- **Success Prediction Ablation**\n \nIn particular, when we mask out the belief input for Rearrangement, we find that the Ask4Help policy simply learns to request expert help 97% of the time, which is very undesirable from the user perspective. This is due to the complex nature of rearrangement which makes it much harder for success prediction to capture the modes of failure where the agent might require assistance. However, for simpler tasks such as object navigation, success prediction provides an effective learning signal, since it is able to capture failure situations better.\n\nWe believe this indicates that having the agent belief as an input allows our method to be generally applicable to tasks of varying difficulty, especially for complex ones like Rearrangement where purely relying on success prediction may not be enough.\n\n- **Upper bounds of result is not 100%**\n\nThe expert available in RoboTHOR is not perfect and has some edge cases where it fails. Notably in cases where the object is on top of some cabinets or shelves and is difficult to bring in view. Hence, it does not achieve 100% success on validation scenes, but these cases are very rare.\n\nSimilarly for rearrangement as shown in Table 2, we use a heuristic expert and the upper bound on expert performance is 92.8% since it also has some edge cases where it fails.",
" - **There are some additional baselines that would help understanding whether the Ask4Help policy is learning something trivial or not**\n \n\nWe discuss the suggested baselines and our findings below.\n\n\nBaseline 1: Replacing pre-trained agent with Random/No-op agent - We implement this baseline and train the Ask4Help module with a random underlying agent. Note that the Ask4Help attempts to maximize task performance with minimal expert help.\n\nThe results on unseen validation scenes is as follows:\n\n\n| Model setting | SR | SPL | EP (%) | Number of Expert Actions |\n|------------------------------|-------|-------|--------|--------------------------|\n| Ask4Help (pre-trained agent) | 86.3 | 33.2 | 12.31 | 17 |\n| Ask4Help (random agent) | 76.44 | 67.65 | 98.99 | 27 |\n\n\n\nIn the case of a random agent, since its object navigation performance is really low, the Ask4Help policy converges to asking for very high expert help (27 expert steps on average per episode) to prevent task failure and still achieves only 77% task success.\n\nWith a pre-trained agent (in this case Khandelwal et al. EmbodiedCLIP[25]), Ask4Help can achieve 86% task success with just 12% expert help. The pre-trained model is playing a good part in achieving high performance complimentary to the expert help.\n\n \n\nBaseline 2: Replace meta-controller with hard-coded policy -\n\nWe implement the suggested hard coded policy, we run the pre-trained agent for M steps and then the expert for N steps, after **M+N** steps, the episode ends. For both M and N, we try the following values, [10,20,30,40]. \n**M - Number of agent steps, N - Number of Expert steps.**\n\n \n\n| M ↓ \\ N → | 10 | 20 | 30 | 40 |\n|:----------:|:----:|:-----:|:-----:|:-----:|\n| **10** | 23.8 | 46.2 | 66.2 | 82.3 |\n| **20** | 28.2 | 48.67 | 68.72 | 72.72 |\n| **30** | 30.7 | 49.3 | 68.5 | 81.4 |\n| **40** | 33.7 | 50.1 | 68.2 | 80.9 |\n\nA policy trained with Ask4Help uses an average of 17 expert steps and 157 agent steps, and achieves a success rate of 86.3.\n\n- **Success Prediction as an important component of the agent?**\n \nBased on experimentation, we find that success prediction provides a useful way of capturing failure modes of simpler tasks like object navigation and can provide a strong learning signal. However, for complex tasks like rearrangement, it is difficult to capture failure situations with a simple success classifier. This motivates us to give the agent belief as input, which is important to generalize to more complex tasks.\n\n \n\n- **After following the expert, the agent could find itself in states where it is out of distribution.**\n \n\nYes, this is a possibility, and could be a potential drawback of freezing the pre-trained agent. However, since we are training the Ask4Help policy with RL to intervene with expert actions, it could potentially learn to provide help in situations that would not push the agent into unknown states.\n\nThis is an interesting consideration for future work, and we will modify the draft to include discussion on this.\n\n \n\n- **Section 3.3 describes how Ask4Help can be trained with multiple user preferences…**\n \n\nWe clarify this in the [common response here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe).",
" - **Method as Instantiation of hierarchical RL**\n \nThe analogy of Ask4Help being the high-level policy, and the underlying task agent and expert being the low-level policies can be seen as Hierarchical RL. However, in our case, neither the expert nor the underlying task agent is trained. We’ll clarify this distinction in the related works updated draft.\n\n- **Clarification on Dataset Split.**\n \nYes, there is a held-out **unseen validation** split that both the pre-trained agent and the Ask4Help policy have not seen or been trained on. All the results presented in the draft are based on this unseen validation set.\n\nYour understanding, that the purpose of the split is to deploy the agent on a set of tasks it has never seen before and where it might not perform well, is correct.\n\nWe do precisely this: we train our Ask4Help policy on the 25% data from the **training** scenes and perform evaluation on the held-out **unseen validation** scenes which neither the pre-trained agents nor the Ask4Help policy have seen before. The fact that the Ask4Help policy shows good results in those **unseen validation** scenes is indicative of its generalization.\n\n- **Expert usage statistics, pretrained agent’s role.**\n\nThe underlying pre-trained agent is an off-the-shelf state-of-the-art ObjectNav agent which achieves 50% success in validation unseen scenes on RoboTHOR. We find that from 1800 episodes in unseen validation scenes, the Ask4Help policy does not provide expert intervention in 619. Out of these 619, 486 end up being successful (78.5%). The pretrained agent is performing a significant portion of the episodes autonomously. We show a [histogram here](https://anonymous-neurips22.s3.us-west-2.amazonaws.com/a4h/agent_without_expert.pdf) of the number of steps the agent takes in these 619 trajectories where Ask4Help does not invoke the expert.\n\nAdditionally for cases where the Ask4Help policy does request expert help, we did some investigation to show that the underlying pre-trained agent is doing non-trivial exploratory work before the Ask4Help policy requests expert help. We plot the closest distance to the target object for the pre-trained agent before it asks for help against the distance of the object from the agent's initial position. We plot the same curve for a random agent for the same episodes on the unseen validation set. The curve can be found here: [plot here](https://anonymous-neurips22.s3.us-west-2.amazonaws.com/a4h/min_distance_to_tgt_curve.pdf). As the curve indicates, the pre-trained agent gets closer to the target object (before asking for help) more frequently than a random agent.\n \n- **Stats about time steps in which expert is used**\n \nWe present a plot showing the number of steps before the expert is invoked for the first time by our Ask4Help policy with a pre-trained agent and a random agent. The plot is [here](https://anonymous-neurips22.s3.us-west-2.amazonaws.com/a4h/expert_timestamp_plot_absolute.pdf). The Ask4Help module with the pretrained agent frequently waits many time steps before invoking the expert whereas Ask4Help with the random agent will almost always request expert help within the first 5 steps.\n\n\n- **Discussion on emergent behavior**\n \nThe emergent behavior is quite diverse. For instance, in one example, the agent has to go around a chair to get a good view of the laptop, and the Ask4help policy intervenes to get the agent around that situation. 
In another case, it fails to find the Alarm Clock because it is on top of the cabinet and hence out of view. The expert help allows the agent to look up and successfully finish the task. Note that, as discussed in Section 4.3.2, the $r_{init\\\\\\_ask}$ penalty discourages the Ask4Help policy from requesting help at all during the episode and allows autonomous operation. Therefore, this emergent behavior is not surprising as the Ask4Help policy allows the agent to attempt the task for a reasonable period before requesting expert help, which in some cases can be the entire episode itself. As suggested, we also present a plot showing at which time step the expert takes over for the first time [here](https://anonymous-neurips22.s3.us-west-2.amazonaws.com/a4h/expert_timestamp_curve_absolute.pdf).\n\n ",
" We thank the reviewer for the insightful comments. We appreciate the positive feedback on the originality and significance of the work, convincing evaluations, well-chosen ablations, and clarity of the paper. Discussion of the raised questions follows.\n\n \n- **Discussion on user preference components and how the embeddings are specified.**\n \n\nWe clarify this in the [common response here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe).\n\n- **How are the reward preferences chosen? Is it right now just a weight on the ultimate success rate?**\n \nDuring evaluation, as discussed in the [common response here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe), we can provide the user preference as an input. It can be seen as a way of specifying how costly task failure is to the user (or, alternatively, how willing a user is to be bothered with a request for help).\n\nWe specify the reward function as a tradeoff between the cost of failure and cost to ask for expert help. It attempts to balance the cost of failure and cost to request expert help to reach a favorable trade-off. During evaluation, as discussed in the [common response here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe), we can provide the user preference as an input.\n \n\n- **Is there a way to regress more versatile/natural preferences from users online?**\n \nCurrently we allow choosing a reward configuration index from 1:30 to specify user preference, like a control knob. One interesting future work could be getting user preferences in the form of natural language.\n\n- **Suggestions for future work**\n \n\nThank you for the recommendation on future work. Adapting the underlying policy is a challenging yet interesting problem that many people in the community are working on.\n\nWe’re excited to look into these frameworks and see how future works build on our approach to adapt the policies based on expert feedback received.",
" We thank the reviewer for the insightful feedback. We appreciate the positive comments about the paper that it presents an important research direction with positive impact.\n\n \nWe address the questions and concerns below.\n\n \n\n- **What reward configuration is used at inference time? What is the distribution of the reward configurations?**\n \n\n \n\nFor object navigation, in Section 4.3.2, we train the Ask4Help policy with multiple reward configurations as described in [common clarification here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe). Note that the training time for a single reward and multi-reward setting is similar. For inference, we vary the user preference by giving different reward configuration embeddings and generate different Ask4Help behaviors with a single policy. Specifically, for results in Figure 3, we use reward configurations corresponding to $r_{fail} \\in \\\\{-5,-7,-11,-13,-19,-21,-23,-30\\\\}$.\n\n \n\n- **In 4.4.3, it is said \"If more performant models are proposed for the task, the Ask4Help policy will accordingly adjust the amount of expert help requested...\".**\n \n\n \n\nWe apologize for the confusion, your understanding is correct. The success prediction module and Ask4help policy would require to be retrained. What we intended to convey was, depending on how good/bad the new model is, the Ask4Help policy will accordingly converge on a favorable trade-off for expert help and task performance.\n\n \n\n- **In the RoomR environment, did the authors investigate what type of action (navigation vs. interaction) provided by the expert was the most common?**\n \n\nOn the validation scenes, we find that the expert provides more navigation actions than interaction actions with a ratio of 6:1.\n\n- **Are the embeddings for the reward configuration learned?**\n \n\nYes, they are trained as described in the [common clarification here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe). This training is what enables it to adapt to different user preferences at test-time.\n\n \n\n- **Are the authors going to release the code and trained ask4help policy so the community can reproduce the results?**\n \n\nThe code to reproduce all the presented results is attached in the supplementary material. We plan to do a public release of our full code and models to the community at the end of the anonymity period.\n\n- **Use of proxy experts to train the Ask4Help policy**\n \n\nAs you note, human-in-the-loop training is an exciting and active area of research which we do not attempt to tackle in this work. That said, we were quite happy to see, recall our human expert evaluation Table 1, that our approach can generalize well to the use of human experts at inference time despite being trained with proxy experts.",
" We thank the reviewer for the insightful feedback. We appreciate the positive comments about the method being generic and outperforming the baselines, and the clarity of the presentation. We will now respond to the highlighted questions and concerns.\n\n \n\n- **Missing ablation on key components.**\n \n- **The success rate is a strong indicator of whether the embodied agent needs help. It is likely that a classifier based on success rate is sufficient.**\n \n\n \n\nWe appreciate the reviewer’s comment. We ran an ablation and observed some interesting results.\n\n\nFor the task of Object Navigation, given its simple nature, the success prediction network is able to capture the characteristics of failures reasonably well. We find that by masking out either the belief input or the success prediction input (but not both), we achieve similar results to the ones presented in the paper. Although, we would like to point out that the success prediction uses belief as an input.\n\n\nHowever, when we try masking out the belief input for Rearrangement, we find that the Ask4Help policy simply learns to request expert help 97% of the time, which is very undesirable from the user perspective. This is due to the complex nature of rearrangement which makes it much harder for success prediction to capture the modes of failure where the agent might require assistance.\n\n\nWe believe this indicates that having the agent belief as an input allows our method to be generally applicable to tasks of varying difficulty, especially for complex ones like rearrangement where purely relying on success prediction may not be enough.\n\n \n\nWe’ll include this discussion in the updated draft.\n\n\n- **Unclear how to adapt to different reward preferences at inference time.**\n \n\nWe clarify how we train with multiple reward configurations in the above common response [LINK to the common response].\n\n \n- **Ask4Help trains the model with a sample of different reward configurations, but the paper doesn’t show how to estimate the expert’s reward configuration at inference time. Do experts select which reward profile they want?**\n \n\n \n\nThe expert indeed selects the profile that they prefer and this can be easily adjusted at inference time if the expert finds the system is asking for too much (or too little) help. As for the last question, further details can be found at [common clarification comment here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe).\n\n \n \n\n- **It will be helpful if the authors can clarify on data efficiency vs. the performance of the Ask4Help model.**\n \n\n \n\nYes, we use 25% of the training scenes for Ask4Help training. We present ablation over how the performance varies when we use just 10% of the training scenes. We train two Ask4Help policies, one with 25% of training scenes like we present in the draft, and one with 10% training scenes. We use the following reward configuration for both cases, $r_{fail} = -10$, $r_{init\\\\\\_ask} = -1$ and $r_{step\\\\\\_ask} = -0.01$. EP represents expert proportion, an indicator of the amount of expert help is used. 
The results are as follows:\n\n \n\n| Model setting | SR | SPL | EP (%) |\n|-----------------------------|-------|-------|--------|\n| Ask4Help (25% train scenes) | 86.3 | 33.2 | 12.31 |\n| Ask4Help (10% train scenes) | 84.44 | 32.74 | 12.1 |\n\n\n\nAs these results suggest, Ask4Help can learn a fairly good policy with even 10% of the training scenes, which allows us to use it for tasks where such training data might be scarce.\n\n \n\n- **If the reward configuration represents the user's preferences for failure, a simple model can threshold the predicted success rate based on the selected user preference and have similar results.**\n \n\n\nAs we clarify in the “**Missing ablation on key components**” response, the success prediction module works reasonably well for simpler tasks but fails for complex tasks.\n\n \n\nAdditionally, picking the right threshold for the given user preference requires access to the validation set to run a hyperparameter search, whereas learning this correspondence as we describe in the [common clarification comment here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe). directly generalizes well on previously unseen scenes. Please see [common clarification comment here](https://openreview.net/forum?id=_bqtjfpj8h¬eId=_Ro-f-VRKCe). for clarification on how this is learnt.",
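To make the combination of inputs discussed in this response concrete — the frozen agent's belief, the success-prediction output, and the reward-configuration index — the following is a small PyTorch sketch of an Ask4Help-style head. The layer sizes, the use of a GRU cell, and all names are illustrative assumptions rather than the released architecture.

```python
import torch
import torch.nn as nn

class Ask4HelpHead(nn.Module):
    def __init__(self, belief_dim=512, num_reward_configs=30, emb_dim=16, hidden_dim=128):
        super().__init__()
        # Lookup table over the reward profiles (user preferences).
        self.reward_emb = nn.Embedding(num_reward_configs, emb_dim)
        # Recurrent core over [belief, predicted success probability, preference embedding].
        self.rnn = nn.GRUCell(belief_dim + 1 + emb_dim, hidden_dim)
        # Two logits: keep following the frozen agent, or ask the expert.
        self.ask_logits = nn.Linear(hidden_dim, 2)

    def forward(self, belief, success_prob, reward_idx, h):
        x = torch.cat([belief, success_prob, self.reward_emb(reward_idx)], dim=-1)
        h = self.rnn(x, h)
        return self.ask_logits(h), h

# The same trained head serves every user preference; only reward_idx changes at test time.
head = Ask4HelpHead()
h = torch.zeros(1, 128)
logits, h = head(torch.randn(1, 512), torch.rand(1, 1), torch.tensor([9]), h)
```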
" As suggested by Reviewers **uya8, HGan, gw2r and bRWC**, we provide some clarification regarding how we adapt to different user preferences at test time.\n\n\nTo reiterate our reward configuration is\n \n$r_t = r_{fail} + 1_{ask} \\cdot r_{init\\\\\\_ask} + r_{step\\\\\\_ask}$ as mentioned in Section 4.3.2.\n\n\nFor instance, one configuration that we use for training is $r_{fail} = -10, r_{init\\\\\\_ask} = -1$ and $r_{step\\\\\\_ask} = -0.01$. Let's refer to this configuration as $R_{-10}$.\n \n\nNow, as mentioned at the end of Section 4.3.2, if we wish to generate multiple reward configurations we can vary $r_{fail}$ to different values and generate different reward configurations.\n\n \n\nSpecifically, if we set $r_{fail} = -1$, which would imply that the cost of failure is not very high for the user, we get a reward configuration $R_{-1}$ with $r_{fail} = -1$, $r_{init\\\\\\_ask} = -1$, and $r_{step\\\\\\_ask} = -0.01$.\n\n\nFollowing the same trend, we vary, $r_{fail}$ from -1 to -30 to cover a broad range of user preferences and generate rewards configuration namely $R_{-1}$ to $R_{-30}$.\n\n\nTo train an agent with all reward configurations $\\{R_{-1}, \\ldots, R_{-30}\\}$, we uniformly sample a reward configuration $R_{i}$ for each episode in the environment. The reward the agent receives during that episode is based on $R_{i}$.\n\n\nDuring validation, we do not have access to these rewards, therefore we need to learn a correspondence between user preference and agent behavior.\n\nTo accomplish that, we associate each reward configuration with an index (e.g. $-1 \\to R_{-1} \\to r_{fail} = -1$), and embed these configuration indices using a standard lookup in a learnable embedding matrix. We provide this embedding as an input to the agent. This allows the agent to learn a correspondence between this embedding input and the cost of failure.\n\nThis correspondence is what enables the agent to modify its behavior according to user preference. If the agent is given the embedding corresponding to a high cost of failure then it should request more expert help to assure higher task success.\n\n \n\nFor results in Figure 3, we train a single policy randomly sampling the reward structure from $\\{R_{-1},\\ldots, R_{-30}\\}$. During inference we simply give different reward configuration indices ranging from $\\{-1,\\ldots, -30\\}$ and the learnt Ask4Help Policy adjusts its behavior accordingly.\n\nWe’ll include this discussion in the updated draft to ensure further clarity, thanks to multiple reviewers for suggesting this.",
" This paper proposed a method to decide when to ask for expert’s help to improve the task performance of embodied agents. Instead of using heuristics, e.g. model confusion, or expanding the action space of the embodied agent, this paper learns a separate policy on top of the decisions of off-the-shelf embodied agents using reinforcement learning. The experiments on RoboTHOR show that the proposed method can achieve higher success rates using fewer expert queries. Strength:\n- The proposed method is generic. Without modifying the trained embodied agent, the method can learn when to perform expert queries.\n- The proposed method reduces the number of expert queries compared to baselines such as model confusion and naive helper.\n- The presentation of the paper is clear. It is easy to follow the paper. \n\nWeakness:\n- Missing ablations on different key components. There are two major inputs for the Ask4Help model, success rate prediction and the embodied agent’s belief, it is unclear the contribution of each component. The success rate is a strong indicator of whether the embodied agent needs help. It is likely that a classifier based on success rate is sufficient.\n- Unclear how to adapt to different reward preferences at inference time. Ask4Help trains the model with a sample of different reward configurations, but the paper doesn’t show how to estimate the expert’s reward configuration at inference time. Do experts select which reward profile they want? It is also possible that an expert adjusts their reward preference while interacting with the system. - While the paper claims that it can learn efficiently with a fraction of training data, the experiment setup still suggests that it takes 25% of the training scenes to train the Ask4Help model. This is not a small fraction. It will be helpful if the authors can clarify on data efficiency vs. the performance of the Ask4Help model.\n- How do different reward configurations affect the number of questions asked? If the reward configuration represents user’s preferences for failure, a simple model can be thresholding the predicted success rate based on the selected user preference and have similar results. This paper has discussed the main limitation of the proposed method which provides opportunities for future work. Another limitation of the proposed method is the fixed user preference, the method doesn’t adapt the number of questions asked based on the interactions with experts.",
" This paper is about learning an ask for help (ask4help) policy on top of an already trained embodied agent. Specifically, the ask4help policy first measures the agent's uncertainty at finishing the given task based on its internal belief state using a pretrained Success Prediction Model (SPM). Then, the ask4help policy decides if it should ask for the expert's help (i.e. the next action to do in the environment) or use the pretrained embodied agent's predicted action. In addition, the ask4help can be adapted at inference time to different users' preference regarding the frequency of being \"disturbed\" by the agent to answer it. Experiments were conducted on two different environments: the RoboTHOR Object Navigation and the Room Rearrangement in iTHOR. Empirically, it was shown that ask4help manages to yield a very higher success rate while requesting the least amount of expert feedback compared to different baselines: the Embodied Agent only, a Naive Helper with predefined frequencies for asking for help, and Model Confusion that measures the agent's uncertainty based on the confidence of the predicted action rather than relying on the agent's belief state.\n **What I like about this paper**\n- How to reuse existing trained models is an important research direction with positive environmental impact.\n- The authors propose a solution to train a single ask4help policy that can deal with different user preferences via sampling different reward configurations during training.\n\n**Potential weaknesses**\n- It is not clear to me what reward configuration is used at inference time? Section 4.3.2 and 4.4.2 do mention a **single** reward configuration used for training.\n- In 4.4.3, it is said \"If more performance models are proposed for the task, the Ask4Help policy will accordingly adjust the amount of expert help requested...\". That sentence seems to imply that this will happen at inference time. However, it is my understanding that both the ask4help policy and the Success Prediction Model will need to be retrained to accommodate for the likely different embodied agent's belief $b_t$.\n\n\n**Originality, quality, clarity, and significance**\n\nThis work shares many similarities with active learning but takes place in interactive environments. The proposed approach is a novel combination of existing techniques applied together to tackle the important research problem of deciding when to ask for help. I could see future work building on this to integrate better with human workflow (e.g., using language to ask questions and interpret the answer similar to [Asking for Knowledge, Liu et al., ICML2022]). I found the paper well-written and well-organized. Figure 2 helped me understand the model overall but it was not clear how the Success Prediction Model was getting any training signal since the gradients were blocked. I realized later it is pretrained and frozen. The submission seems technically sound to me except for how the different reward configurations were defined during training.\n\nOverall, I tend to recommend this paper for acceptance because of the research problem being addressed and the proposed solution that doesn't require training an embodied agent from scratch. I might have missed some flaws, especially with respect to active learning. - In the RoomR environment, did the authors investigate what type of action (navigation vs. 
interaction) provided by the expert was the most common?\n- What reward configuration is being used during the evaluation?\n- What is the distribution of the reward configurations?\n- Are the loop-up embeddings for the reward configuration learned?\n- Are the authors going to release the code and trained ask4help policy so the community can reproduce the results? The authors did mention two main limitations of the proposed approach. At the moment, expert feedback is not used to improve the embodied agent which can be annoying to a human user. The second limitation of this work is the authors used proxy experts (available for the tested environments) to train the ask4help policy. Using human-in-the-loop for training is not explored in this work (it is still an active area of research).\n",
" This is a strong paper that examines the question of how to enrich existing policies for embodied tasks (in this case, object navigation and rearrangement) with the ability to “ask for help.” Unlike prior work that defines heuristics for model uncertainty, requires extra supervision in terms of language or subtasks, or requires retraining the base embodied agent policy, the proposed approach — Ask4Help — learns a base policy-agnostic approach for learning when to ask for help, *without the need to retrain the base policy*. Formalized as a separate policy that takes in a set of user preferences, the “belief” (hidden state) of the embodied base policy, and a small number of interactive rollouts in known environments, Ask4Help is able to learn to trade-off task success with “expert load” (amount of queries to the expert) with minimal data, and with remarkable results on two benchmark tasks — RoboTHOR object navigation and AI2-THOR room rearrangement.\n\nThis paper further goes above and beyond to show how the proposed system compares to scenarios where users have different preferences (weightings on success rate vs. desired query load), comparisons to ablations with fixed probabilities of picking “expert” actions, comparison of querying actual (vs. synthetic) humans, as well as the robustness of Ask4Help in the presence of noisy experts. This paper is original and significant, proposing a system that not only makes sense and seems necessary as we build stronger, more powerful embodied systems. The evaluation is incredibly convincing, and the ablations are well chosen and are crucial in showing the efficacy of the proposed approach.\n\nThe clarity of the paper is also an added bonus — the motivating examples were clear and helped ground out the early parts of the paper, the approach is simple yet flexible, and in general, led to a strong paper.\n\nThe sole weakness is that I wish the user preference component had a little bit more discussion; it’s still not clear to me how this component interacts with the rest of the system (and especially the training/reward function), nor how the various “preference embeddings” are specified/chosen.\n - How are the reward preferences chosen? Is it right now just a weight on the ultimate success rate? \n- Is there a way to regress more versatile/natural preferences from users online?\n This paper is very transparent about its limitations — namely that this Ask4Help approach only augments an existing policy with the ability to ask for help, not necessarily learn from the provided expert feedback.\n\nI agree that this is probably out of scope for this work, but for future work I’d suggest the authors look into frameworks like Lazy or ThriftyDAgger as ways to bootstrap systems that (1) know when to ask for expert help, and (2) learn/update policies based on that information.\n",
" This paper introduces a method, Ask4Help, for incorporating expert knowledge for embodied AI. The authors leveraged a pre-trained embodied AI agent and designed a policy to switch between agents' prediction and expert action as the final acting policy. The switching policy is trained by RL given the reward from both success rate and the portion of expert knowledge used in the episode. The resulting human-in-the-loop policy achieves performance improvement on several common embodied AI challenges. [+] The problem of incorporating expert knowledge and collaborating with humans is becoming increasingly important over the past few years, especially with more agents showing strong potential for indoor tasks. This makes the main topic of this paper important and meaningful.\n\n[+] The overall writing of this paper is clear and illustrative with ideas, methods, and results clearly stated. The authors showed significant performance improvements on several embodied AI challenges by augmenting a pre-trained embodied AI agent with expert knowledge. This shows strong potential for the pipeline of prompting large-scale pre-trained agents with expert knowledge, especially with recent trends on language understanding (e.g. few-shot capabilities of GPT-3).\n\n[-] The major concern of this paper comes with its design and evaluation metrics. The authors stated in the limitation section that the proposed model possesses the disadvantage that it can not learn from the expert's feedback. This, in my opinion, is a critical issue since it always requires expert knowledge to perform well and fails to leverage extra in-context expert knowledge. As the current policy only switches between the pre-trained agent's action and the expert's action, I don't see a way to make the current system a self-adaptive one when put into a new training scene. This makes the whole point of designing such a policy for querying expert knowledge questionable.\n\n[-] Following the previous point, I do think the current experimental settings and results are not convincing or have little impact. The authors used RL to train the Ask4Help policy for deciding which action to use (expert's or the model's) and seem to provide full accessibility of expert knowledge when generating the final scores (since the model can choose the amount of expert knowledge to use), this makes the current results hard to interpret as the model can always learn to use more expert knowledge without a limit. In this case, a proper comparison in my opinion should be constrained under the amount of expert knowledge available and over the performance gain of applying each method. Next, the baseline models are also designed to follow the same pipeline of policy switching without baselines from previous works that leverage the same amount of data for imitation learning or refinement. 1. As mentioned previously in the Weakness section, the authors should clarify the motivation of designing a policy that directly adopts expert knowledge without learning from it, especially on how such a policy would benefit us on more embodied AI challenges without that much expert knowledge.\n2. The authors should discuss the design of experimental settings and baseline methods: why not set a constraint of expert knowledge available? why not compare with prior methods that refine models with human expert knowledge under the same constraint? 
how is the current result significant since it is actually provided with all expert knowledge, given that the model learns to select between using it and not using it automatically?\n3. Model-wise, the motivation for designing the Success Prediction Model (SPM) is somewhat duplicative to me since it is in essence doing the same job of predicting which action to use (e.g. the expert's when success prediction is low, the model's when success prediction is high). I hope the authors could discuss if the SPM model is substitutable by the reward received by Ask4Help since it is trained offline with ground-truth success and failure trials.\n4. In Figure 3, why is the upper bound of the results not 100%? I guess this is related to the task, but I hope the authors would clarify. The authors have partially stated their limitations on continual learning. However, I think there is still more work to do to make the current experimental design solid and sound. I hope the authors would consider the facts stated previously in Weakness and Questions, especially on the significance of the results since it now needs all expert knowledge during training and adopts only part of it during acting. This setting, to me, is not reasonable and might need better adjustment.",
" The paper introduces Ask4Help, a method for augmenting an existing policy with the ability to fall back to an expert policy during an episode. This is achieved without retraining the existing pre-trained agent by introducing a meta-controller that will select whether to follow the agent or the expert at every timestep. The meta-controller does not receive raw observations, but the agent's belief state, a prediction of the agent's success rate, and an embedding informing it about the user's preference (i.e. how costly it is to ask for help). The proposed method is evaluated on two tasks, namely object navigation (RoboTHOR) and room rearrangement (iTHOR), where it greatly boosts the success rate of the pre-trained agent while comparing favorably to other baselines in the amount of expert usage. **Strenghts**\n\n- The problem of providing agents with the ability to ask for help is an important one.\n- The paper is easy to follow.\n\n\n**Weaknesses**\n\n- The method itself is not novel, as it can be seen as a Hierarchical RL with two low-level policies: the pre-trained agent and the expert.\n- While the quantitative results look strong, the videos in the supplementary material show that the discovered behavior is extremely simple. During a first phase the meta-controller selects the agent, which does not seem to know how to solve the task and simply roams around the room. Then, it selects the expert for a few timesteps -- which finds the object and solves the task. This results in high success rate (thanks to the hard-coded expert) but low expert usage (which is diluded due to the long initial phase where the agent is used). It would be helpful if authors could provide some statistics about expert usage aggregated over all tasks (e.g. detailed stats about the timesteps in which the expert is used within an episode, aggregated over all episodes).\n- In light of the aforementioned qualitative results, I am not convinced that the baselines are adequate. There are some additional baselines that would help understanding whether the Ask4Help policy is learning something trivial or not:\n - Replacing the pre-trained agent with a random and/or no-op policy. This would provide insight about whether it is really learning to combine the pre-trained agent and the expert, or whether the observed gains just come from the fact that the expert will always solve the task when given enough time.\n - Replacing the meta-controller with a hard-coded policy that selects the agent for M steps and then runs the expert for N steps (where both M and N should be swept over).\n- The Ask4Help policy has a non-standard architecture, e.g. it takes the predicted success rate as input. There are no ablation studies in the paper (nor the supplementary material), which make it difficult to understand whether this is an important component of the agent or could be removed.\n- The dataset split described in L208 is non-standard in machine learning. My understanding is that the purpose of the split is to deploy the agent on a set of tasks it has never seen before and where it might not perform well. However, why isn't there a third, truly held-out, set ot tasks where one can evaluate whether the Ask4Help policy generalizes? Otherwise, if one is allowed to train and evaluate on the same set of tasks, why can't we re-train the agent on the validation set instead of using Ask4Help? 
**Major**\n\nI have described my main questions and concerns in the previous section, namely:\n- Stats about expert usage\n- Additional baselines\n- Importance of the different design choices in the Ask4Help architecture\n- Dataset split\n\n\n**Minor**\n\n- After following the expert, the agent could find itself in states where it is out of distribution. This is a drawback of freezing the pre-trained agent that is not discussed in the manuscript, and a discussion about this would benefit the paper.\n- Section 3.3 describes how Ask4Help can be trained with multiple user preferences. However, Section 4 describes a single reward function. Could you please explain how this is done, and provide examples showing how this affects the success rate and expert usage?\n- Given the connections with Hierarchical RL, I would strongly recommend extending the Related Work section to include an overview of the field. Yes."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
8,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4,
4
] | [
"tMcswrMk0Gb",
"sL-0zxzb5tT",
"8aVwUwqqNxL",
"CmTCm7EeCk",
"mjdB5LyCNaE",
"pkHnXdaX6_C",
"ojIdQu2Vz-1",
"wdfdDPaECxzq",
"FbsxKgWxL8",
"tIaCYCU5AtD",
"ShwB63hMO8L",
"kUQGOPIQkV9",
"kUQGOPIQkV9",
"tzew69pMfGP",
"IuolXUqfaKp",
"xcBWyJZdKcT",
"nips_2022__bqtjfpj8h",
"nips_2022__bqtjfpj8h",
"nips_2022__bqtjfpj8h",
"nips_2022__bqtjfpj8h",
"nips_2022__bqtjfpj8h",
"nips_2022__bqtjfpj8h"
] |
nips_2022_WV1ZXTH0OIn | Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization | Optimizing expensive-to-evaluate black-box functions of discrete (and potentially continuous) design parameters is a ubiquitous problem in scientific and engineering applications. Bayesian optimization (BO) is a popular, sample-efficient method that leverages a probabilistic surrogate model and an acquisition function (AF) to select promising designs to evaluate. However, maximizing the AF over mixed or high-cardinality discrete search spaces is challenging since standard gradient-based methods cannot be used directly and evaluating the AF at every point in the search space would be computationally prohibitive. To address this issue, we propose using probabilistic reparameterization (PR). Instead of directly optimizing the AF over the search space containing discrete parameters, we maximize the expectation of the AF over a probability distribution defined by continuous parameters. We prove that under suitable reparameterizations, the BO policy that maximizes the probabilistic objective is the same as that which maximizes the AF, and therefore, PR enjoys the same regret bounds as the original BO policy using the underlying AF. Moreover, our approach provably converges to a stationary point of the probabilistic objective under gradient ascent using scalable, unbiased estimators of both the probabilistic objective and its gradient. Therefore, as the number of starting points and gradient steps increases, our approach will recover a maximizer of the AF (an often-neglected requisite for commonly used BO regret bounds). We validate our approach empirically and demonstrate state-of-the-art optimization performance on a wide range of real-world applications. PR is complementary to (and benefits) recent work and naturally generalizes to settings with multiple objectives and black-box constraints. | Accept | This paper studies a Bayesian optimization method where some of the variables are discrete and some are continuous, and proposes using probabilistic reparameterization. Reviewers unanimously agreed that the paper is well-written and solves an important real-world problem, and most reviewers found the experimental results to be convincing. In addition, the reviewers found that the rebuttal and revision were convincing and clarified many of the initial questions.
Several reviewers pointed out that some of the results referenced in the paper could be referenced more explicitly and/or explained in the appendix. For the final version, please make an effort to address the reviewers' feedback to make the paper more self-contained. | train | [
"qzP9cvrE37S",
"NktDD8BTpS",
"cCsmcnMgdNK",
"3uQvEtkpX7_",
"nOyZcnxTT-",
"VtSr60zwPoo",
"2prvGHR6JxM",
"kUAYcaC_th",
"7rz0tyJR2mZ",
"zQJOB5qENOr",
"WY4U_RqEApY",
"ipIQNMYZPE",
"-0qma4BT1q3",
"RPsAHsgpRLz",
"uHmDFpjEzFD",
"KUS7mc8l4cr",
"vyEpCV7amUF",
"eWy3IUAUs9U",
"zr_wTevD75",
"s8C0wPVZzb8",
"aed2z6ofCU",
"KeM7YvVl1gs",
"PyZBBTkyhDF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" - Thank you. I understand better the flaw I thought is not present. I will raise my score.\n\n- I also checked your experiments in F.1 and there is some evaluation to the number of MC samples. So indeed you performed this check, which is nice. This makes it well-rounded practical evaluation. \n\n- I have to say I am not fond of the Theorems 1 and 2, since they are folklore and you should either cite them as *direct* corollaries or not state at all. I do not think these are statements worth calling theorems. Also in Thm.2, Robbins-Monro conditions require not only the square of the stepsizes to be finite but also the sum of step-sizes to diverge - so you are missing one of the conditions to the best of my knowledge, but well this is easily fixable so I am not reducing points for this. \n\nMy belief is still that this has decent practical benchmark but this reparametrization trick is not very deep result or insight. I would really love to see this subsampling benchmark where one subsamples the search space randomly iid, with the same number of MC \\times iteration numbers and compare to that. I hardly doubt it would be competitive. I think there is no free lunch here. If you have a good reason, not anectodal like the experiments performed why this is always better, I am all ears. Anyway, I think there is no harm in accepting this paper. \n\n",
" Thanks for the additional explanations about the reformulation. The provided examples clarify how the formulation results in a different problem compared to relaxation strategies. I will update my rating accordingly. Regarding MC sampling, the results presented indeed show that good results can be attained on the problems considered in this paper with relatively few samples, but this paper could be strengthened with additional theory that this holds true for the general case (or under a certain set of assumptions). ",
" Dear mBPd,\n\nWe hope you are well. We have clarified all misunderstandings (including a crucial aspect of our work) and addressed your concerns. Do you have any remaining concerns after reading our response? We believe that your score does not reflect your review, especially given our rebuttal, and we kindly request that you consider increasing your score.\n\n Thank you!",
" Dear reviewer,\n\nWe appreciate your engagement, but there is clearly a misunderstanding here. Thanks in advance for working together to try to resolve this gap with us. \n\n> In order to evaluate (1), we need to sum over 2^M variables.\n\nWe need to sum over 2^M terms, not variables. Theorem 1 proves that the maximizers of PR are consistent with those of the original AF when all combinations can be enumerated and we show in Theorem 2 that our MC estimator is unbiased. Our method is devised precisely for the case where not all terms can be enumerated, and we empirically show that this MC estimator converges rapidly and produces SoTA results in practice.\n\n> In reasonable application such as M=50 as you mention above 2^M is really not feasible. So you resort to MC sampling and sample N configurations to construct a gradient w.r.t theta using the \"log trick\" or \"policy gradients.\" \n\nA main contribution of the paper is to show that this gradient of the probabilistic objective w.r.t. theta, while analytically infeasible to compute, can be estimated via MC the likelihood ratio method (a.k.a REINFORCE). We have provided direct empirical evidence for this in the original submission (Appendix F.1), in our [responses](https://openreview.net/forum?id=WV1ZXTH0OIn¬eId=7rz0tyJR2mZ) / updated manuscript\n\n> The same gradient would arise if I define my function just on point estimates.\n\nWe don’t understand this. Can you please clarify? \n\n> Using Theorem 1, which I find obvious, one can show actually that, if the samples $\\tilde{z}_i$ are always the same in each round of optimization you are identifying just the best configuration out of effectively the same as if you subsampled your discrete space. \n\nThis is incorrect. The samples $\\tilde{z}_i$ are not the same in each round of optimization.\n\nBased on your comment “I think you approach could work and would be different to subsampling discrete sample if at every step you chose a different random $\\tilde{z}_i$. Then this would reduce to a non-convex stochastic optimization problem like encountered with SGD on neural nets”, you seem to understand the idea of our method under resampling the samples $\\tilde{z}_i$ at each step and using stochastic optimization. **Now, crucially, the $\\tilde{z}_i$ are different in each iteration**. However, we don’t use random resampling + SGD (this would also be an option), but instead use an sample average approximation approach (otherwise known as the “method of common random numbers”) to render the optimization deterministic (conditional on a single draw of random base samples). \n\nYou are correct that we fix **some** random variables to reduce the variance of our MC estimators. However, **WE DO NOT FIX the samples $\\tilde{z}_i$.** These samples vary as $\\theta$ varies during the optimization! This is of critical importance. Instead of fixing the samples $\\tilde{z}_i$ during the optimization, we fix *uniform base samples* using the reparameterization in Table 4. This is described in Appendix C.4. Although these base samples are fixed, the samples $\\tilde{z}$ will indeed vary as $\\theta$ varies. Is this clear? This point is very important. A similar techinique is used for variance reduction in Sec 4 of [1] and is the standard way of optimizing MC acquisition functions in [BoTorch] (https://github.com/pytorch/botorch) under a different reparameterization of base samples. \n\n> This is exactly the point identified by the reviewer mBPd. 
In appendix C.4, you actually say that you always use the same \\tilde{z}_i or another way to keep the same stochasticity fixed. \n\nThis is a misunderstanding. We fix the uniform base samples, not the samples $\\tilde{z}_i$. This means that $\\tilde{z}_i$ can (and does) vary while optimizing the probabilistic objective. Reviewer mBPd appears to also have an incorrect interpretation of our results, which we clarified in [our response](https://openreview.net/forum?id=WV1ZXTH0OIn¬eId=vyEpCV7amUF)\n\nWe’d also like to note that SAA with L-BFGS-B outperforms the stochastic gradient ascent approach (Appendix K), and is free from hyperparameters such as learning rates or momentum.\n\n**Finally, in your initial review, you noted:**\n\n> But I doubt that doing MC sampling is addressed sufficiently in this work. Is it enough to do 1024 MC samples represent the expectation? How did you pick this number? Do you have some evidence/explanation for this? This is the crucial contribution of a work with this approach. Showing this is enough and why. If you were to show this then this is a good work.\n\nDid our prior responses and additional analysis address this question for you? \n\n[1] Balandat, M., et. al. (2020). BoTorch: a framework for efficient Monte-Carlo Bayesian optimization. Advances in neural information processing systems, 33, 21524-21538.\n",
" So let me take the example of M binary random variables from line 110. Then using (1) we construct the reparametrization. There are exactly as many theta variables (M) as there are binary random variables. In order to evaluate (1), we need to sum over 2^M variables. \n\nIn reasonable application such as M=50 as you mention above 2^M is really not feasible. So you resort to MC sampling and sample N configurations $\\tilde{z}_i$ to construct a gradient w.r.t theta using the \"log trick\" or \"policy gradients.\" The same gradient would arise if I define my function just on point estimates. \n\nUsing Theorem 1, which I find obvious, one can show actually that, if the samples $\\tilde{z}_i$ are always the same in each round of optimization you are identifying just the best configuration out of $\\tilde{z}_i$ effectively the same as if you subsampled your discrete space. This is exactly the point identified by the reviewer mBPd. In appendix C.4, you actually say that you always use the same $\\tilde{z}_i$ or another way to keep the same stochasticity fixed. I think you approach could work and would be different to subsampling discrete sample if at every step you chose a different random $\\tilde{z}_i$. Then this would reduce to a non-convex stochastic optimization problem like encountered with SGD on neural nets. ",
" > How is your MC scheme different to just picking 1024 (or MC numbers) random configuration and calculating the best out of those? \n\nIn some BO papers, acquisition functions are optimized over a discrete set of inputs. E.g. sample some finite set of values $X$ (a sample from the whole search space) and the next point is selected by maximizing the acquisition function over this discrete set. This is not what we are doing. Our approach is completely and fundamentally different. Please let us know if this is confusing and we will elaborate further.\n\n> Aren't you effectively randomly subsampling configurations and picking the best among those? Why is this probabilistic interpretation needed?\n\nRather than sampling some discrete configurations $\\boldsymbol z$ from the search space and taking the best, we have reparameterized the problem by using a random variable $\\boldsymbol Z$ governed by $\\boldsymbol \\theta$. With 1024 MC samples, we sample 1024 configurations of $\\boldsymbol Z$ from the distribution defined by $\\boldsymbol \\theta$ at each step of optimizing the probabilistic objective—many such steps occur to find the optimal $\\boldsymbol \\theta$. *We optimize $\\boldsymbol \\theta$ with gradients, so $\\boldsymbol \\theta$ changes during the continuous optimization and thus the samples of $Z$ also change.* Note that this is fundamentally different from sampling configurations $\\boldsymbol z$ from the search space and taking the best.\n\nDoes this answer your question? Did our other responses in this thread and in https://openreview.net/forum?id=WV1ZXTH0OIn¬eId=RPsAHsgpRLz you address your other concerns?\n\nThanks!\n",
" Thank you for addressing my minor concerns. My score remains the same. ",
" How is your MC scheme different to just picking 1024 (or MC numbers) random configuration and calculating the best out of those? Aren't you effectively randomly subsampling configurations and picking the best among those? Why this probabilistic interpretation is needed?\n",
" > But I doubt that doing MC sampling is addressed sufficiently in this work. Is it enough to do 1024 MC samples to represent the expectation? \n\nYou may have missed our initial response regarding this question in the [2nd part (2 / 3)](https://openreview.net/forum?id=WV1ZXTH0OIn¬eId=RPsAHsgpRLz) of our response. We performed a sensitivity analysis with respect the number of MC samples where we find that 128 works about as well as 1024 MC samples and **both perform comparably with computing PR analytically on the two small problems where that is feasible**. *The results are shown in Appendix F.1 (Figures 5, 6, 7, and 8).*\n\nTo directly answer the question “Is it enough to do 1024 MC samples to represent the expectation?” We examine the MC approximation error relative to analytic PR on the chemistry (192 discrete configurations) and ackley problems (1024 discrete configurations). We have added the results in Figure 26 in Appendix P. These results report the mean absolute percentage error (MAPE) \n\n$\\frac{100 }{|X_\\text{discrete}|} \\cdot \\sum_{\\boldsymbol x \\in X_\\text{discrete},\\boldsymbol \\theta \\in \\Theta_\\text{discrete}} \\frac{ \\mathbb E_{\\boldsymbol Z \\sim p(\\boldsymbol Z |\\boldsymbol \\theta)} \\alpha(\\boldsymbol x,\\boldsymbol Z) - \\frac{1}{N}\\sum_{i=1}^N \\alpha(\\boldsymbol x, \\boldsymbol z_i)}{\\max_{\\boldsymbol x \\in X_\\text{discrete},\\boldsymbol \\theta \\in \\Theta_\\text{discrete}} \\mathbb E_{\\boldsymbol Z\\sim p(\\boldsymbol Z |\\boldsymbol \\theta)} \\alpha(\\boldsymbol x,\\boldsymbol Z)}$\n\nevaluated over a random set of 10,000 points from $\\mathcal X \\times \\mathcal \\Theta$ (the sampled sets are denoted $X_\\text{discrete}, \\Theta_\\text{discrete}$), where $\\boldsymbol z_i$ are samples from $p(\\boldsymbol Z |\\boldsymbol \\theta)$. We observe a rapid reduction in MAPE as we increase the number of samples. With 1024 samples, MAPE is 0.055\\% (+/- 0.0002 \\%) over 20 replications (different MC base samples in PR) on the chemical reaction problem and MAPE is 0.018\\% (+/- 0.0003 \\%) on the Ackley problem. With 128 samples, MAPE is 0.282\\% (+/- 0.0029 \\%) over 20 replications (different MC base samples in PR) on the chemical reaction problem and MAPE is 0.052\\% (+/- 0.0021 \\%) on the ackley problem. We find in our closed loop optimization experiments that these error rates are small enough to achieve SoTA BayesOpt performance, while still remaining computational feasible.\n\n> How did you pick this number? Do you have some evidence/explanation for this?\n\nWe initially just chose 1024 because our intuition was that a large number of MC samples would be needed, but in a sensitivity analysis we performed a few days prior to our submission (the one reported in Appendix F.1), we found that smaller numbers of MC samples, including 128 samples, does not degrade optimization performance, even on the problems with search spaces with high dimensionality and many discrete configuration (such as SVM). See detailed results in Appendix F.1.\n\n> This is the crucial contribution of a work with this approach. Showing this is enough and why. If you were to show this then this is a good work.\n\nWe believe we have provided strong evidence in response (2/3) and above regarding the approximation error of MC sampling. Please let us know if there are any additional experiments or explanations to convince you further.\n\nAnother interesting data point is regarding the rank order of MC vs analytic PR under sample average approximation. 
For example, if analytic PR has higher value for some $\\boldsymbol x_1, \\boldsymbol \\theta_1$ than a different $\\boldsymbol x_2, \\boldsymbol \\theta_2$, what is the probability that the MC estimator of PR gives higher value to the better design according to analytic PR? Using 10,000 random pairs of points from $\\mathcal X \\times \\Theta$, we find that the MC estimator of PR gives higher value to the $\\boldsymbol x_1$, $\\boldsymbol \\theta_1$ with probability 0.92.\n",
" I do not doubt that problems with a lot of discrete parameter exist and are relevant. But I doubt that doing MC sampling is addressed sufficiently in this work. Is it enough to do 1024 MC samples represent the expectation? How did you pick this number? Do you have some evidence/explanation for this? This is the crucial contribution of a work with this approach. Showing this is enough and why. If you were to show this then this is a good work. The idea in this paper is very simple and not groundbreaking but justification why this should work is of substance. \n\nI have a feeling one could come up with a counterexample, where reaching global optimum with such method with be very improbable.",
" Thank you for your comment!\n\nAfter reading your comment and the revision, I was able to understand your work much clearer.\n\nI will increase the score slightly.",
" Thank you for your thorough review! We address your specific comments below.\n\n> In the middle and right panels of Figure 1, what does the color represent? I think it's the AF value, but in that case, why is it so different between the two panels?\n\nCorrect. The reason is that a continuous relaxation over-estimates the AF value. The AF value at infeasible continuous points (red) is much higher than the best AF value at any feasible discrete point (white).\n\n> The experiments would be stronger with comparisons to non-BO black-box optimization methods, such as evolutionary algorithms or to methods that use, e.g., a VAE to embed the discrete variables into a continuous space.\n\nWe have added Nevergrad’s recommended (Rapin et al., 2018) evolutionary strategy for mixed/discrete spaces (PortfolioDiscreteOnePlusOne) to our unconstrained single objective problems and provided the results in Appendix M in Figures 21 and 22. We observe that PR vastly outperforms Nevergrad on all test problems. We are happy to include alternative ES baselines if you have suggestions for better baselines.\n\nRegarding embeddings into a continuous space via VAEs: our method is compatible with any type of kernel, including embedding-based approaches. For example, in Appendix H, we consider the embedding method of Zhang et al., Technometrics 2019. There are a large variety of VAE methods that could be used for embedding the entire search space, each of which generally involves a problem-dependent kernel choice (see e.g., Kusner et al., ICML 2017, Dai et al., ICLR 2018, Stanton et al., ICML 2022). With PR, one could use distances in the latent space, but optimize in the original discrete space. This would avoid the VAE-BO issue of obtaining continuous latent designs from continuous optimization in the latent space that do not map to any discrete designs (the focus of many works—e.g. Griffiths et al., Chem Sci 2020 and Notin et al., NeurIPS 2021). Further work in this area is beyond the scope of this work.\n\nKusner et al., ICML 2017. Grammar variational autoencoder.\n\nDai et al., ICLR 2018. Syntax-directed variational autoencoder for structured data.\n\nGriffiths et al., Chem Sci 2020. Constrained Bayesian optimization for automatic chemical design using variational autoencoders. \n\nNotin et al., NeurIPS 2021. Improving black-box optimization in VAE latent space using decoder uncertainty.\n\nRapin et al., Github 2018. Nevergrad - A gradient-free optimization platform.\n\nStanton et al, ICML 2022: Accelerating Bayesian Optimization for Biological Sequence Design with Denoising Autoencoders\n\n> I understand that there is limited space, but section 4.3 is very difficult to understand without a lot of prior knowledge.\n\nThank you for pointing this out. We have added a more detailed description in Appendix C.4 in our revision. We can move this to the main text in the camera ready when we are allowed an extra page, if you suggest.",
" > Why does Ackley PR-TR perform better?\n\nPR-TR actually performs worse than PR on Ackley. Can you clarify your question please? \n\nIn some problems such as cellular network, trust regions methods (e.g. Casmopolitan and PR-TR) are very good at zooming in on promising regions in the search space and perform better than non-trust region methods. In other cases where a GP is suitable surrogate model for the global landscape, non-trust region methods (e.g. standard BO) can outperform trust region methods like PR-TR such as on this ackley problem (this is also observed in Daulton et al., UAI 2022 in Figure 9).\n\nDaulton et al., UAI 2022. Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces",
" > Effect of MC sampling is not investigated in the experiments and in the motivating Figure.\n\n*This is incorrect*. In the motivating figure, MC sampling is used. As we mention in the second sentence of the Experiments Section (Sec. 6), “We use $N=1024$ MC samples in our experiments and demonstrate that PR is robust with respect to the number of MC samples (and compare against analytic PR, where computationally feasible) in Appendix F.”\n\nIn particular, Appendix F.1 (Figures 5 and 6), includes a comparison of PR with varying numbers of MC samples against analytic PR on the chemical reaction and ackley problems, where computing analytic PR was feasible. We find that MC PR performs comparably with its analytic counterpart. We do not find statistically significant differences in optimization performance between 128, 256, 512, and 1024 samples vs analytic, but using 128 samples is approximately 8x faster than the 1024 samples used in our paper. For your convenience, we have provided the main experiments using PR and PR+TR with 128 MC samples in Figures 9 and 10 in Appendix F.1 in our revision. In our revision, we have added comparisons with 8,16,32, and 64 MC samples in Figures 7 and 8 in appendix F.1. We do see performance degradation with 64 or fewer samples, but the wall time is also much smaller. \n\n> Can in any of these experiments exhaustive search be performed? It would be nice to understand the effect of MC sampler.\n\nSee our response above regarding comparisons with MC and analytic PR. \n\n> The paper cites a lot of by now folclore theorems, such as unbiasedness of monte carlo, SGD convergence and regret rates of algorithms depending on AF. I think the paper could be explained in 1 page and experiments take 2 pages, exactly suitable for a workshop paper.\n\nWe respectfully disagree with you here. We include these results to be rigorous in our both empirical and theoretical evaluation. Theoretical results around the convergence of our estimators, as well as regret bounds are a component that many members of the community find valuable: similar theoretical results are key contributions of many other BO papers published at NeurIPS and ICML. Our main theoretical result (Theorem 1) is critical for showing that optimizing the probabilistic objective results in the same designs as the original mixed/discrete acquisition function optimization problem. This is valuable because therefore using PR yields the same BO policy as using the raw acquisition function, if optimized well. \n\n> In Fig. 1, acquisition function evaluates the expectation exactly? I would not expect there to be no spread if used with monte-carlo sampling.\n\nIn Figure 1, Monte Carlo sampling is used. What spread are you referring to? If you are referring to the spread in final AF value, optimizing the probabilistic objective is using different fixed random base samples in each replication, so the optimization is not the same in each replication.\n\n> Why not use Camsopolitan and HyBO on more realistic benchmarks? Could you not just add the constraint in the AF selection?\n\nThis is described in Sec 6.3. Both methods run on all problems (including real world problems like SVM, chemical reaction, cellular network) except welded beam (neither supports constraints in the current implementation) and oil sorbent (neither support multiple objectives). 
On Cellular Network, our submission included partial results from Casmopolitan on the cellular network problem, but Casmopolitan failed with numerical issues; we believe this may have to do with the trust region getting too small and Casmopolitan evaluating the same design repeatedly leading to ill-conditioned covariance matrices. Hence, complete results are not available for Casmopolitan. Similarly, HyBO also had ill-conditioned covariance matrix issues on the cellular network problem, so only partial results are included in our submission. On SVM (53d), HyBO is very slow because it scales poorly with the input dimension, and so we terminated HyBO reported partial results.\n\n> The best performing algorithm PR+TR is not mentioned in the main text.\n\nWe do introduce PR+TR in the main text in Section 6. We will move a clearer description of the method (provided in the submission in Appendix C.2) to the main text in the camera ready version, when we are allowed an additional page.\n",
" Thank you EFXz for your comments. We are surprised that you believe our contribution is poor (score of 1), despite acknowledging that “Optimization of AF is indeed an elephant in the room, nice to see that it is being recognized as a problem”. We are glad that you recognize that our method is practical for the common scenarios that we target in this work.\n\n> The problem with it is that for discrete spaces its complexity is larger than evaluating the whole [discrete search space] $\\mathcal Z$ once.\n\nThis depends entirely on the number of discrete configurations and number AF evaluations that PR uses during optimization. We find that PR is much faster than enumerating all discrete options on Welded beam (>370M discrete configurations), which has the only fully discrete search space of the problems in the paper. PR optimization (with 1,024 MC samples) of the PO took 250.9 seconds on average across 10 replications, whereas optimizing the AF by enumerating all discrete options took 1349.3 seconds. \n\nNevertheless, in the setting where the AF can be evaluated at every discrete option in a reasonable amount of time, we would recommend to use enumeration rather than PR, since the black-box function is typically expensive to evaluate and time spent optimizing the AF is often comparably cheap. It is worth noting that enumeration is infeasible in many practical scenarios (more below) and on most problems in the paper. We can make this clearer in the discussion section. \n\n> In mixed spaces, there are conditions where [PR] could be better, but how much better is not addressed in this paper.\n\nThank you for raising this. We have added results that include a new baseline method that enumerates all discrete configurations and optimizes the continuous parameters for each fixed discrete configuration to directly answer your question of “how much better”. It is only feasible to run this on chemistry with 192 discrete configurations and Ackley with 1024 discrete configurations. On both problems, we find that PR achieves the same optimizer as the enumeration baseline in a much shorter period of time (as shown in Appendix N in Figures 23 and 24 in our revision). \n\n> The whole approach is sensible only if one of these two conditions are met: a) mixed space. Let me denote the effort to optimize $\\mathcal{X}$ given a specific $z$ to $B$. If $B\\cdot|\\mathcal Z|$ is less than optimizing $\\mathcal{Z}$, $\\mathcal{X}$ jointly with the Monte-Carlo or exact calculation b) discrete space, but very efficient evaluation of MC sampling exists\n\nYes, thank you for noting this. The precise scenarios that our method is designed for are (a) mixed search spaces, (b) large discrete search spaces where enumeration would be extremely slow or infeasible. Note that the latter can occur rapidly as the number of discrete factors increases.\n\n> My conjecture is that b) does not occur in practice much\n\nIn the real world, it is not uncommon to have expensive-to-evaluate combinatorial optimization problems with more factors than can be exhaustively evaluated. For example, consider the testing of compiler or database configuration changes. There are hundreds of parameters that can be tuned, and to test such changes out in production, time-intensive experiments must be conducted to understand the effects of these changes on metrics like CPU usage, memory usage, or latency [1]. 
In machine learning, models may contain dozens to hundreds of features, which might affect the runtime of algorithms (due to data retrieval or computation), in addition to the model loss [2]. The SVM feature selection problem is one toy example of this and includes 50 features ($2^{50}$ discrete configurations).\n\n[1] Design and Analysis of Benchmarking Experiments for Distributed Internet Services. Bakshy & Frachentberg. ACM Conference on the World Wide Web 2015.\n[2] Hidden Technical Debt in Machine Learning Systems. Sculley et. al. NeurIPS 2015.",
" > While the main advantage of the proposed method is that it enables gradient based optimization, the accuracy of the gradients relies on Monte Carlo estimation, which must be evaluated over the original discretized space (eg. 6). This introduces the same curse of dimensionality that would arise from optimization over a mixed variable space (e.g., by enumeration or branch-and-bound), and, as a result, the computational results may be unfair as the proposed method is given much more information in the form of MC samples.\n\nThe MC approximation error increases with the dimension of the search space, but the cost is fixed for a fixed number of MC samples regardless of the dimension of the search space. Enumeration would scale poorly (time wise) with the dimension of the search space, whereas PR is scalable and feasible (see results on enumeration in the general discussion to all reviewers).\n\nAs to the computational results being “unfair” since the “proposed method is given much more information”, we are not sure we understand this point. All methods utilize the same number of costly evaluations of the underlying black box function. The different approaches indeed evaluate the acquisition function in different ways, which affects not only the optimization performance w.r.t the black-box optimization task, but also computational complexity and runtime of the AF. We address these tradeoffs explicitly in the paper. We hope that our response to your review and the others will also help clarify the substantive differences in runtime even more.\n\n> E.g., when 1024 MC samples are used, would it be a more fair comparison to allow the multi-start methods to have 1024 starts?\n\nIf we are considering the cost of optimizing the AF, then giving more starting points to non-PR methods would help equalize the computational budget for all methods. However, PR is not very sensitive to the number of MC samples (see Figures 5-8 in Appendix F.1), and the number of MC samples can be reduced significantly (from 1024 to 128) to improve wall time without degrading optimization performance. PR with 128 MC samples still outperforms alternatives (see Figures 9-10 in Appendix F.1).\n\nTo further provide a demonstration of AF optimization performance at a particular wall time budget, we provided additional starting points to non-PR methods (64 instead of the default 20) and compared against PR with 64 MC samples. The results in Figure 25 in Appendix O in our revision show that PR outperforms alternatives on the chemistry problem regardless of wall time budget.\n\n> Could a comparison be added to show that the proposed MC method results in more sample-efficient (in terms of optimization) performance than existing methods? \n\nPlease see our experiments in Section 6 (summarized in Fig. 2), which demonstrates that our method is more sample-efficient than other methods, in terms of optimization of the expensive-to-evaluate black-box function.",
" Thank you for your thoughtful review and questions! We are glad to hear that you found the soundness and contribution of our work to be good (scores of 3) and the presentation to be excellent (score of 4). We hope that this response will clarify crucial aspects of the paper to change your score from a borderline reject to an accept. \n\n> I am not very familiar with probabilistic reformulation, but it seems the proposed reformulation effectively inherits the weaknesses of BO on discrete inputs that it purports to avoid. For example, a binary variable is recast as a binomial distribution, which must be again \"discretized\" (in Algorithm 1) to create feasible values for $z_n$. Moreover, the secondary transformation in Table 3 introduces similar techniques used to relax discrete variables for optimization.\n\nThe proposed approach does not inherit the weakness you describe, and this is the main theoretical result of the work. In particular, Theorem 1 of our algorithm is that if an optimal $\\boldsymbol \\theta_n$ is found, then any discrete sample $\\boldsymbol z_n ~ p(\\boldsymbol Z|\\boldsymbol \\theta_n)$ is optimal and that for any optimal $\\boldsymbol z_n$ there is a $\\boldsymbol \\theta_n$---that assigns nonzero probability to $\\boldsymbol z_n$— that is optimal with respect to the probabilistic objective. Theorem 1 says that ($\\boldsymbol x_n, \\boldsymbol z_n$) is guaranteed to be optimal with respect to the acquisition function. In the case of a unique best optimizer (here we consider the case of a single discrete parameter for simplicity), $\\theta_n$ is a point mass on the best $z_n$, in which case discretize in Algorithm 1 is simply to map a one-hot categorical to a categorical representation of $[0, C-1]$. If there are multiple $z_n$ that are optimal for a given optimal $x_n$, then if the optimal discrete values are consecutive ordinals or $z_n$ is binary or categorical, then $\\theta_n$ may not be a point mass, but rather provide support exclusively over optimal values of $z_n$. If the $z_n$ is ordinal and the optimal values are not consecutive, then there could be multiple $\\theta_n$ that point masses on the different optimal $z_n$. In contrast, with a continuous relaxation, the optimal continuous relaxation $z'^*$ does not necessarily lead to an optimal $z_n$ when $z'^*$ is rounded. In our revision we have provided an updated theoretical formulation that should help clarify some of these points. \n\n> Moreover, the secondary transformation in Table 3 introduces similar techniques used to relax discrete variables for optimization.\n\nParticularly in response to this statement, the reparameterization in Table 3 is used for computational stability and is commonly used (e.g., Yin et al., 2020). Table 3 reparameterizes the parameter $\\theta$ of a discrete probability distribution, and the probability distribution still only provides support on discrete values $z$. Therefore, this reparameterization of $\\theta$ *does not* lead to the aforementioned overestimation issue with continuous relaxations of discrete parameters with AFs because the AF is only evaluating on discrete values sampled from $p(z|\\theta)$.\n\n\n",
" Thank you for your review! We are glad that you found our work to be “quite novel.” The only weakness you listed was to discuss [20,50] more thoroughly. Given that this and your question regarding bounds for the continuous relaxation were your only concerns (both of which we address below), we hope that this will resolve any ambiguity and make you consider increasing your score.\n\n> 1. In Table 1, ordinal variables have the form of continuous relaxation $z' \\in [0,C−1]$ and categorical variables have the form of continuous relaxation $z' \\in [0,1]^C$. Are they correct? \n> 2. For ordinal variables, [0,1)→0,[1,2)→1,…[C−2,C−1)→C−2,[C−1,C−1]→C−1. Thus, C−1 has a smaller range than the other integers. I think it should be [0,C). Moreover, for categorical variables, $[0,1]^C$ should be a simplex. More precisely, a probability vector for categories has to sum to one, so I think it should be $[0,1]^C$ where a sum-to-one constraint is assumed. It is a generalization of the binary case, which implies that its continuous relaxation is [0,1].\n> 3. I am not sure if the continuous relaxations have changed, other parts are still the same. Or the current version is just equivalent to the continuous relaxations I described. If updates are required, please describe in the rebuttal.\n\nThe discretize() function (Table 1) actually makes it so that values in $[0, 0.5)$ are rounded to $0$ and values in $[C-1-0.5, C-1]$ are rounded to $C-1$. One could simply choose to use the bounds $[-0.5, C-0.5)$ as the bounds for the continuous relaxation. If you look at the code we provided, we already use $[-0.5, C-0.5)$ for the Sobol baseline and the Sobol initialization for all methods. We have updated this in our revision. For numerical optimization of the continuous relaxation, using either set of bounds should yield comparable performance. In fact, expanding the bounds seems like it would only hurt the performance of the continuous relaxation baseline since the additional space will remain unexplored.\n\nFor a continuous relaxation, the vector of values across categories is not a probability distribution and does not need to sum to 1. This is the same relaxation used in [20]. For PR, the $\\theta$ is a probability vector that does need to sum to 1. Thanks for pointing this out; we have clarified this in Table 2. We emphasize that using the transformations proposed in Table 3, $\\theta$ is always in the simplex.\n\n> some previous work such as [20, 50] should be discussed more thoroughly\n\n[20] is the work by Garrido-Merchan et al. that proposes the ExactRound method, which we discuss at the end of Section 2 (L97-L101 in the original manuscript) and in Section 5 (L255-258 in the original manuscript). In our revision, we have made it clearer in Section 2 that [20] avoids the over-estimation issue with the continuous relaxation by discretizing the continuous relaxation before optimizing the acquisition function. However, applying discretization before evaluating the acquisition function makes the acquisition function non-differentiable with respect to the continuous relaxation of the discrete parameters. Therefore, Garrido-Merchan et al. rely on using finite differences to approximate the gradients of the acquisition function, which has large flat regions across slices of the continuous relaxation. \n\n[50] Is the work by Wilson et al. on using the reparameterization trick to approximate the expectation over the GP posterior in many acquisition functions via Monte Carlo. 
We mention this briefly in Section 5 (L266-268 in the original manuscript). While this reparameterization trick has some similarities with our probabilistic reparameterization, [50] reparameterizes *existing* multivariate normal random variables in terms of standard normal random variables and then proposes to use a sample-path gradient estimator. In contrast, our approach *introduces* a new probabilistic formulation using discrete probability distributions and uses likelihood-ratio-based gradient estimators since sample-path gradients cannot be computed through discrete sampling. We have added this discussion to Appendix L in our revision and we will move it to the main text in the camera-ready version when we are permitted a 10th page.\n",
" Dear reviewers, \n\nThank you for your time and for the thorough feedback. We appreciate that you all recognized that our work is novel and solves an “interesting” (9pbz), “challenging and important” (mBPd) topic of optimizing acquisition functions over discrete and mixed spaces in Bayesian optimization, which is common in “many real-world problems” (H7wp). As EFXz articulated, “optimization of acquisition function is indeed an elephant in the room, nice to see that it is being recognized as a problem.” \n\nWe proposed using a theoretically-grounded probabilistic reparameterization (PR) of the discrete parameters in terms of discrete random variables, governed by continuous parameters. The gradients with respect to the continuous parameters can be computed analytically or estimated via unbiased Monte Carlo estimators to enable efficient optimization of the probabilistic objective, the expectation over the introduced probability distributions. H7wp found this approach to be “performant and elegant” and remarked that “It's the kind of solution that seems obvious once written down.”\n\nOur work includes a “solid empirical analysis beyond the standards of the BO field” (EFXz). Furthermore, we proved “mathematically that the proposed reformulation does not affect the convergence guarantees of standard BO”, and we appreciate that to a reader “the importance of the theorems is easy to see” (H7wp).\n\nOur work has broad impact for this “challenging and important” problem space because probabilistic reparameterization is agnostic to the choice of acquisition function and model (we demonstrate in Appendix H and I respectively), it is compatible with black-box constraints (see Welded Beam problem, Section 6) and multiple objectives (see Oil Sorbent problem, Section 6), and it is complementary with many methods including TuRBO (Section 6) (Eriksson et al., NeurIPS 2019). \n\nUnanimously, reviewers found our writing to be clear (presentation scores of 3,3,4,4) and remarked that our method was sound (soundness scores of 3,3,3,4). And 3/4 reviewers found our contribution was significant (contribution scores of 1,3,3,4).\n\nWe have uploaded a revision of our manuscript (including the appendix) with changes colored in *red* for clarity. The manuscript includes:\n* A clearer, more general version of our theoretical results. In our revision, we provide an updated theoretical formulation that should help clarify some of the questions raised by the reviewers. The main change compared to the initial manuscript is that the results are derived for general parameterizations of probability distributions (using compactness and continuity arguments), with the specific parameterizations described in the main text becoming special cases of those results. This in particular should help reduce confusion around the role of “discretizing” the solutions obtained in the reparameterized space.\n* Comparisons with additional baseline methods: an evolutionary algorithm from NeverGrad and a baseline where discrete configurations are enumerated and continuous parameters (if any) are optimized for each discrete configuration (the gold standard, when computationally feasible)\n* A more prominent presentation of the original finding that PR sees little-to-no performance degradation and large speed-ups with 128 MC samples relative to 1,024.\n* A comparison of acquisition optimization performance showing that PR works best for any wall time budget on the chemistry problem.\n* Improvements to the text per reviewer feedback.\n",
" This paper proposes a Bayesian optimization method over mixed spaces, i.e., discrete space and continuous space, via probabilistic reparameterization. Since optimizing an acquisition is a challenging task where discrete variables exist, probabilistic reparameterization is used to allow us to optimize the acquisition function with gradient information. The authors provide the theory that the proposed method serves a consistent optimizer. Finally, they demonstrate that the proposed method works well in diverse experiments. ## Strengths\n\n+ It solves an interesting topic regarding discrete and mixed spaces.\n\n+ It proposes a solid method using probabilistic reparameterization.\n\n+ The experimental results are reasonable.\n\n## Weaknesses\n\n- I think this work is quite novel, but some previous work such as [20, 50] should be discussed more thoroughly.\n\nPlease see the text box described below. I would like to ask the authors some questions.\n\n1. In Table 1, ordinal variables have the form of continuous relaxation $z' \\in [0, C - 1]$ and categorical variables have the form of continuous relaxation $z' \\in [0, 1]^C$. Are they correct? For ordinal variables, $[0, 1) \\to 0, [1, 2) \\to 1, \\ldots [C-2, C-1) \\to C-2, [C-1, C-1] \\to C-1$. Thus, $C-1$ has a smaller range than the other integers. I think it should be $[0, C)$. Moreover, for categorical variables, $[0, 1]^C$ should be a simplex. More precisely, a probability vector for categories has to sum to one, so I think it should be $[0, 1]^{C-1}$ where a sum-to-one constraint is assumed. It is a generalization of binary case, which implies that its continuous relaxation is $[0, 1]^1$.\n\n1. I am not sure if the continuous relaxations are changed, other parts are still same. Or the current version is just equivalent to the continuous relaxations I described. If updates are required, please describe in the rebuttal. I do not think that this work has any negative societal impacts and any specific limitations.",
" This work proposes a method for optimization of acquisition functions in Bayesian optimization (BO), where at least some of the input variables are discrete (binary, integer, or categorical). The method is based on a probabilistic reformulation, whereby the discrete variables are cast as random variables, drawn from probability distributions. Optimization can then performed on the expected value of the acquisition function, using the parameters of the assumed probability distributions as inputs instead of the original (discrete) variables. This reformulation enables the use of gradient-based optimization; the authors propose a Monte Carlo approach to estimate the gradients. STRENGTHS:\n\n1. The method and significance are well presented, and the introduction and case studies show why the setting of BO over mixed variable spaces is both challenging and important.\n\n2. The computational studies demonstrating the method are very thorough, with a wide range of case studies and tests considered.\n\n3. The authors prove mathematically that the proposed reformulation does not affect the convergence guarantees of standard BO.\n\nWEAKNESSES:\n\n1. I am not very familiar with probabilistic reformulation, but it seems the proposed reformulation effectively inherits the weaknesses of BO on discrete inputs that it purports to avoid. For example, a binary variable is recast as a binomial distribution, which must be again \"discretized\" (in Algorithm 1) to create feasible values for $z_n$. Moreover, the secondary transformation in Table 3 introduces similar techniques used to relax discrete variables for optimization.\n\n2. While the main advantage of the proposed method is that it enables gradient based optimization, the accuracy of the gradients relies on Monte Carlo estimation, which must be evaluated over the original discretized space (eg. 6). This introduces the same curse of dimensionality that would arise from optimization over a mixed variable space (e.g., by enumeration or branch-and-bound), and, as a result, the computational results may be unfair as the proposed method is given much more information in the form of MC samples. Could a comparison be added to show that the proposed MC method results in more sample-efficient (in terms of optimization) performance than existing methods? E.g., when 1024 MC samples are used, would it be a more fair comparison to allow the multi-start methods to have 1024 starts?\n\nSince the proposed method effectively updates continuous parameterizations used to compute the \"discretized\" values of the original updates, is there an equivalent relaxation and iterative optimization method in the original space? Yes, provided in supplementary material.",
" The paper addresses BO in mixed decision spaces, where part of the space is continuous and part is discrete. Namely, it focuses on the problem where the discrete space is large and cannot be exhaustively evaluated. The proposed strategy reparametrize the objective with a parametrized discrete probability distribution over the discrete decision space, which can be optimized using continuous optimization tools. Further extensions with MC sampling and extensive benchmark is provided. Strengths\n----------\n- I think this is a very practical algorithm if certain conditions are met (see Limitations). Authors identify problems where these conditions are met. \n- solid empirical analysis beyond the standards of the BO field. \n- optimization of AF is indeed an elephant in the room, nice to see that it is being recognized as a problem. \n\nWeakness\n---------\n- I feel the contribution to the field does not warrant publication at the most prestigious venue. I think people were aware of the reparameterization trick in this context. The problem with it is that for discrete spaces its complexity is larger than evaluating the whole \\mZ once. In mixed spaces, there are conditions where it could be better, but how much better is not addressed in this paper. \n- The paper cites a lot of by now folclore theorems, such as unbiasedness of monte carlo, SGD convergence and regret rates of algorithms depending on AF. I think the paper could be explained in 1 page and experiments take 2 pages, exactly suitable for a workshop paper. \n- The best performing algorithm PR+TR is not mentioned in the main text.\n- Convergence of SGD on the non-convex objective is also not ideal, but I guess this is not addressed in the paper since the “discrete part” is the challenge. But its good to point out that by creating a continuous relaxation not all issues are removed. \n - In Fig. 1, AF evaluates the expectation exactly? I would not expect there to be no spread if used with monte-carlo sampling.\n- Why not use Comsopolitan and HyBO on more realistic benchmarks? Could you not just add the constraint in the AF selection?\n- Why on Ackley PR-TR performs better?\n- Can in any of these experiments exhaustive search be performed? It would be nice to understand the effect of MC sampler. \n 1. The whole approach is sensible only if one of these two conditions are met:\na) mixed space. Let me denote the effort to optimize \\mathcal{X} given a specific z to B. If B*|\\mZ| is less than optimizing \\mathcal{Z}, \\mathcal{X} jointly with the Monte-Carlo or exact calculation\nb) discrete space, but very efficient evaluation of MC sampling exists \n\nMy conjecture is that b) does not occur in practice much, and a) can be indeed practical. \n\n2. Effect of MC sampling is not investigated in the experiments and in the motivating Figure. \n\n",
" Maximizing Bayesian optimization acquisition function (AF) over mixed or high-cardinality discrete search spaces is challenging when the space is too large to enymerate directly. This is important because the theoretical regret bounds that come with certain AFs only apply if the AF can be maximized. The paper proposes to reparameterize the discrete parameters into probability distribution defined by continuous parameters, and then to maximize a probabilistic objective (PO) consisting of the expectation of the AF over those continuous parameters. The authors call this a probabilistic reparameterization (PR) of the AF. The paper then proves that maximizing this objective also maximizes the original AF, providing the same guarantees as maximizing the AF. They then derive unbiased and efficient Monte Carlo estimates of the PO and its gradients and show empirically that their method outperforms other mixed-variable optimizers on both maximizing the AF and the overall Bayesian optimization task. \n ## Strengths\n\n- The idea of inducing a distribution with continuous parameters and then optimizing the expectation of the AF over the new parameters is both performant and elegant. It's the kind of solution that seems obvious once written down.\n- The paper addresses an important limitation when applying Bayesian optimization to many (most?) real-world problems. \n- The paper is well-contextualized in relation to previous work. \n- The writing is generally clear, and the importance of the theorems is easy to see. \n- The experiments seem sound, and the conclusions are supported by the results. \n\n## Weaknesses\n- I understand that there is limited space, but section 4.3 is very difficult to understand without a lot of prior knowledge.\n- The experiments would be stronger with comparisons to non-BO black-box optimization methods, such as evolutionary algorithms or to methods that use eg a VAE to embed the discrete variables into a continuous space. \n - In the middle and right panels of Figure 1, what does the color represent? I think it's the AF value, but in that case, why is it so different between the two panels?\n The primary limitation is the computational cost of the MC estimation and maximization, which the authors address. This could be improved by explicitly stating the compute required for their experiments. "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"3uQvEtkpX7_",
"vyEpCV7amUF",
"aed2z6ofCU",
"nOyZcnxTT-",
"uHmDFpjEzFD",
"kUAYcaC_th",
"ipIQNMYZPE",
"7rz0tyJR2mZ",
"zQJOB5qENOr",
"uHmDFpjEzFD",
"eWy3IUAUs9U",
"PyZBBTkyhDF",
"RPsAHsgpRLz",
"uHmDFpjEzFD",
"KeM7YvVl1gs",
"vyEpCV7amUF",
"aed2z6ofCU",
"s8C0wPVZzb8",
"nips_2022_WV1ZXTH0OIn",
"nips_2022_WV1ZXTH0OIn",
"nips_2022_WV1ZXTH0OIn",
"nips_2022_WV1ZXTH0OIn",
"nips_2022_WV1ZXTH0OIn"
] |
nips_2022_9a1oV7UunyP | When to Update Your Model: Constrained Model-based Reinforcement Learning | Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impacts of model shifts, and their corresponding algorithms are prone to degrade performance by drastic model updating. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee of MBRL. Our follow-up derived bounds reveal the relationship between model shifts and performance improvement. These discoveries encourage us to formulate a constrained lower-bound optimization problem to permit the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns. Motivated by these analyses, we design a simple but effective algorithm CMLO (Constrained Model-shift Lower-bound Optimization), by introducing an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and produces a boost when various policy optimization methods are employed. | Accept | This paper studies the relation between model shift and the performance of model-based reinforcement learning. The paper proposes a new algorithm that leads to empirical improvement over certain data sets. All the reviewers agree that the paper provides useful theoretical insights into model-based reinforcement learning, and the experiments are also consistent with the theory. | test | [
"d9iFUQp_SGd",
"9rhqQ9HN2CB",
"WjKE4fAi48U",
"9p2kgfusP0R",
"VZVWseLjx5",
"LX4D7us5oNf",
"vUxm3lzapf",
"-kX-Fc2HiaQ",
"_tRLzDnDxZ8",
"ytLLQBNsrPW",
"ByB5gJlmnZ",
"EpGEfLOCk6r",
"v22RSWkON3p",
"ARIV52KLKC6",
"qrfp7qoLFZK",
"XLRFOT2Rk_U",
"5bxM80s2Ip5",
"4Tzy8Q7gZxCs",
"CBCavhzBf0",
"2oNSoPueuVvE",
"E_AfWbv7fkf",
"3TJ5MnJnMy",
"0JpPFh_kT6U",
"YWbdrONZQBT",
"LxVUXd3xRf-",
"GOy9QZzel1W",
"gE0s5OSV_Mh",
"OXFNzAfvdfh",
"yJSAXbpbIFZ",
"cUL37YAAdzw",
"eDK-jBQyyK5",
"zS2AU-1To0",
"dVozEhUEFkb",
"k54_3pS64ui",
"z5OdYKGN5rO",
"6fNAdnkWbqK",
"cOEMvbUkq24",
"EU-7jX_Byjy",
"X2m43Ei1ucL",
"gpEG3gTGBoC",
"Uyxxl36GoST",
"ACK1veiIHN4"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, \n\nWe thank the reviewer for your insightful and constructive comments and suggestions, which provide much helpful guidance to improve the quality of our paper! We really enjoy communicating with you and appreciate your efforts! \n\nBest wishes!\n\nThe authors.",
" Thanks for the comments. Yes, I agree with the authors on the decoupling aspect and I think that is correct. I acknowledge the fact that a general proof for 4.7 is hard but wanted to emphasize that is critical as the analysis is somewhat strongly relying on the proof of 4.7. Hence, that will be important even for this work.\nHowever, under some assumptions, the authors have shown the proof which I believe is sufficient given this work points to an interesting question.\n",
" Dear reviewer, \n\nThank you for helping us improve the paper and for updating the score! We really enjoy communicating with you and appreciate your valuable suggestions! \n\nBest wishes!\n\nThe authors.",
" Thanks for the authors' rebuttal and their efforts to improve this work. As the response has addressed my concerns, I would like to increase my score by 1 (6, weak accept). Best wishes.",
" Thank you for quickly response! We really enjoy communicating with you and appreciate your efforts.\n\n> **Q1:** The authors use $vol(S_D)$ to estimate the size of the state coverage $S_{pc}^{\\pi_i}$ of the current policy $\\pi_i$. However, the replay buffer $\\mathcal D$ stores all samples from the initial policy $\\pi_0$ to the current one $\\pi_i$, which is not the subset of $\\mathcal S_{pc}^{\\pi_i}$. The authors may want to explain why they use D to estimate the state coverage of the current policy as a practical implementation.\n\nSorry for the unclear statement in the previous response. We use $vol(S_{\\mathcal D})$ to estimate “the full range of state space can be explored until current\", or $\\bigcup_{i} S_{pc}^{\\pi_i}$ for clarity.\n\nIn $\\sum_{s'\\in\\mathcal S}|P_{M_1}(s'|s,a)-P(s'|s,a)|$ , as long as the tuple $(s,a,s')$ is the real interation data, it could be used for computing, so that we do not need to care about these tuples are collected by what policy. As the $s'$ we have access to are not limited to the current policy collection, it is reasonable for us to estimate them by replay buffer $\\mathcal D$. \n\nThank you for helping us improve the paper! \n\n\n\n> **Q2:** The authors may want to provide the detailed formula for computing $vol(\\mathcal S_{\\mathcal D})$. \n\nThank you for your suggestions! \n\nAbout computing the $vol(S_{\\mathcal D})$ of the convex closure $S_{\\mathcal D} = \\{ \\sum_{s_i \\in \\mathcal D} \\lambda_i s_i: \\lambda_i \\geq 0, \\sum_i \\lambda_i = 1\\}$ constructed on the replay buffer D: \n\nWe first sample $N$ Tuples from the replay buffer $\\mathcal D$, then perform Principal Component Analysis for dimension reduction, and then leverage **the Graham-Scan algorithm** to construct a convex hull and output the convex hull volume of these $N$ points. \n\nThe area of the convex hull of 2-D points $P_1, \\ldots, P_n$ is: \n\n$\\int_{x=-\\infty}^{\\infty}\\int_{y=-\\infty}^{\\infty} \\mathbb 1_{\\sum_{i=1}^{n} \\lambda_i P_i : \\lambda_i \\ge 0, \\sum_{i=1}^{n} \\lambda_i = 1 } (x,y) dx dy$\n\nWhere $\\mathbb{1}_{A}(x,y)$ is the indicator function of 2-D set $A$.\n\nSpecifically, for computing the area of the 2-D convex hull, we use the Graham scan algorithm described as follows:\n\n---\n\nInput: $n$ 2-D points $P_1, \\ldots, P_n$\n\nOutput: the area of the convex hull of $P_1, \\ldots, P_n$\n\nSteps:\n\n1. Find the left-then-lowest point $P_s$\n2. Initialize a point-stack $S$ containing only $P_s$\n3. Sort the remaining points in counterclockwise order around $P_s$\n4. For each point $P_i$ in the sorted list of remaining points:\n Repeat\n Denote the top point of $S$ as $P_j$, the second-to-top point as $P_k$\n If $P_i$ is **to the left** of the line $\\overrightarrow{P_k P_j}$:\n break\n Else:\n Pop $P_j$ off $S$\n End Repeat\n Push $P_i$ onto $S$\n5. Return the area of the polygon defined by the points in $S$\n\nNotes:\n\n1. Point $A$ is \"to the left\" of the line $\\overrightarrow{B C}$, if the cross product of $\\overrightarrow{B C}$ and $\\overrightarrow{B A}$ is positive.\n2. The area of the polygon defined by points $P_1, \\ldots, P_n$ is $\\frac{1}{2} \\sum_{i=1}^{n} (P_{i} \\times P_{(i+1)\\mod n})$\n\n---\n\n\nThank you again for your comments! We will continue to improve the paper according to your careful suggestions. Hope we have resolved your concerns and we are glad to have further discussions if you have any questions. ",
" Dear reviewer: \n\nThank you for your reply and understanding. We will update our paper considering your and other reviewers' comments accordingly. We sincerely thank you for your comprehensive comments.\n\nBest wishes!\n\nThe authors.",
" Thank the authors for the clarification and detailed explanation! I decided to increase my score to borderline accept. But I still feel the studied problem is of less theoretical interest, as the single-step predictive model is relatively easy to learn in experiments (at least Mujoco), and the efficient exploration claim is not theoretically justified.",
" Thanks for the authors' response. I have a further concern for $vol(\\mathcal{S}\\_\\mathcal{D})$, and one of my concerns remains the same.\n\n1. The authors use $vol(\\mathcal{S}\\_\\mathcal{D})$ to estimate the size of the state coverage $\\mathcal{S}^{\\pi\\_i}_{pc}$ of the current policy $\\pi\\_i$. However, the replay buffer $\\mathcal{D}$ stores all samples from the initial policy $\\pi\\_0$ to the current one $\\pi\\_{i}$, which is not the subset of $\\mathcal{S}^{\\pi\\_i}\\_{pc}$. The authors may want to explain why they use $\\mathcal{D}$ to estimate the state coverage of the current policy as a practical implementation.\n\n2. The authors may want to provide the detailed formula for computing $vol(\\mathcal{S}\\_\\mathcal{D})$.",
" > **Q1** \"Thank the authors for the comment. It clarifies the difference between their work and previous ones. However, the advantage is still not clear. For example, regarding \"when , this approximation fails\", is the model error of the policy w.r.t. a fixed (e.g. local) model, which already gives nice monotonic property when assuming access to optimization oracle (i.e. no need to consider future model, model shift etc). The proposed method seems to be the dual form of works such as DPI (i.e. not limiting policy, but limiting models).\"\n\nThank you very much for appreciating that our work is novel from previous works in problem modeling, theoretical structure, and algorithm design. To better address your concerns, we will explain the advantages of our theory and our algorithm based on the previous response more concisely.\n\nOur novel monotonicity analysis from the global view does not conflict with the local view (i.e., we marginate the monotonicity of the local view by policy optimization oracle). Notably, we aim to tackle the issues that the local view theory ignored. \n\nFirst of all, let us elaborate on **the issues ignored by local view**, and our theory tackles these problems. \n\n* Issue 1: \n * Firstly, when $\\epsilon_m$ is large, the performance of the policy in the real environment $V^\\pi(\\mu)$ will be poor: $\\epsilon_m$ means that model is quite different from the environment and the discrepancy $C(\\epsilon_m, \\epsilon_\\pi)$ is large. Thus, even if $V_M^{\\pi}(\\mu)$ can be monotonic, the corresponding performance evaluated in the real environment $V^\\pi(\\mu)$ may be quite low for $V^\\pi(\\mu) \\geq V^\\pi_M(\\mu) - C(\\epsilon_m, \\epsilon_\\pi)$. Secondly, when $\\epsilon_m$ is large, local view analysis could not guarantee that the $V^\\pi(\\mu)$ is non-decreasing: it is difficult to satisfy that policy improvement in a single policy iteration is higher than $C(\\epsilon_m, \\epsilon_\\pi)$, so $V^\\pi(\\mu)$ cannot be guaranteed to be monotonically increasing in the real environment either.\n * $\\epsilon_m$ may not shrink upon model updating, as the local view cannot guide the model updating. Then, the policy performance still cannot grow in the future model, and the monotonicity across models fails. \n* Issue 2:\n * An important fact, which we provided in our response to Major concerns 2.1, implies model shift or model error stays a substantial influence during the MBRL training process. Thus, it is important to consider the varying model shift. (i.e., considering future model, model shift, etc.)\n * Previous works gave an upper bound on the distribution shift of all models. These solutions would be very coarse if only the upper bound of the model shift given. Even worse, since the given upper bound is likely too large (refer to issue 1), it will fail to find a feasible solution considering monotonicity per policy iteration in practice, thus making the monotonicity guarantee fails.\n\n* Issue 3:\n * Model quality is the bottleneck of the MBRL algorithm performance, so it is crucial to improve model quality. However, local view analysis can only guide policy iteration but not for model updating. 
They crudely treat model learning as a supervised learning process independent from policy exploration.\n * Notably, a crucial complexity of MBRL is that, besides the fact that model quality affects policy quality, the policy in turn affects the outcome of model learning via the data collected from its interaction with the environment, so a crude supervised learning process is not enough. \n\nIn a nutshell, it is useful to consider the future model and the model shift, because the monotonicity analysis given under a fixed model would fail and ignore some vital properties of MBRL. ",
" Regarding the problems mentioned above, we will show **the advantages of our theory in dealing with these problems**.\n\n* Towards issue 1: When $\\epsilon_m$ is large, it is useless to worry about whether performance can be improved after a single policy iteration, and it is difficult to meet the monotonicity requirements, as detailed in Issue1. At this point, model quality is the key to performance improvement. Our theory focuses on the sub-optimal policy $\\pi_i$ evaluated in each model $M_i$ and we guarantee the real performance $V^{\\pi_i}(\\mu)$ is non-decreasing. Thus, we can guarantee monotonicity across models, and help improve the model quality. \n\n* Towards issue 2: Instead of giving a lazy upper bound of model error and then throwing out the important nature of model varying, our theory considers what the model shifts (might be seen as the difference in two model errors just for understanding) will affect the eventual corresponding policy performance. Overall, our method models the impact of $\\epsilon_m$ changes in performance and uses this impact to help optimization.\n\n* Towards issue 3: We provide a novel measurement of the learned model quality instead of the validation loss in previous work. As we detailed in our paper, Theorem 4.3 implies that if the model update $M_1\\rightarrow M_2$ can shorten the divergence between the estimated dynamics and the true dynamics and improve the ceiling performance on the model, it may guarantee overall performance improvement under the true dynamics. Then, we could say that $M_2$ is better than $M_1$. Further, Proposition 4.7 guides when to update our model to keep improving the model quality and overall policy performance. \n\nFurthermore, note that considering the MBRL process, **the guidance of our theory to the algorithm** (especially in model optimization)is distinct and important, while previous work on local view failed to do so. We list the guidance advantages as follows and the detailed explanation refers to our response to Q3: \n\n1. Guide the model improvement.\n2. Guide the policy exploration.\n3. The policy optimization oracle allows using many local view results. Note that, we indeed have adopted the MBPO (the discrepancy bound class) results in main experiments, and the TRPO (the API class) results in the ablation study.\n\n\nBesides, regarding your concern about DPI, it seems that we are in the dual direction intuitively. Our theoretical analysis is not only quite distinct from DPI, as we stated in our response to Q1, but we also have **considerable advantages over DPI**. For example, our theories are free of some strong assumptions of DPI. DPI requires that the reward function is a quadratic function, but our theory and algorithm work well under any reward function $r \\in [-R, R]$.",
" > **Q2**: What I find interesting is that the proposed method might encourage exploring by not imposing constraints on policy updates. However, current results (monotonic improvement) are not enough for concluding efficient exploration, which can still get stuck at local optima. I suggest the authors take this into consideration, given the unclear theoretical advantage compared to existing methods.\n\nThanks for your valuable suggestions!\n\nAs far as we know, effective exploration does not mean a globally optimal solution, and the model-based RL algorithms that we have known cannot guarantee achieving a totally global optimal solution in complex scenarios.\n\nWe illustrated in the experiments that our theory and algorithm can **promote exploration**, which is able to help us **improve local optima**. Our higher local optima can be seen from our outstanding learning curves. And here we provide a comparison in policy coverage to show our better exploration property. The policy coverage increasing with the stages means that the policy has new explorations at every stage and may not fall into a poor local optimum.\n\n| Env | Algo | Stage1 | Stage2 | Stage3 | Stage4 | Stage5 |\n| ----------- | ---- | ---------- | ---------- | ---------- | ---------- | ---------- |\n| HalfCheetah | CMLO | 138.566126 | 182.281857 | 243.466268 | 302.816499 | 344.356213 |\n| HalfCheetah | MBPO | 129.251405 | 173.085743 | 242.492030 | 264.853917 | 338.555574 |\n| Ant | CMLO | 354.154379 | 744.91538 | 849.473640 | 876.119479 | 909.798043 |\n| Ant | MBPO | 342.134362 | 729.295456 | 821.658472 | 864.933838 | 880.252964 |\n\nWe hope the explanation could resolve your concerns and help to understand the advantages of our theory and algorithms. If you have further questions, we are glad to discuss them with you. Thanks again for your comments and we sincerely wish you would reconsider your score.\n",
" Thank you for your additional feedback! We respond to the concerns below:\n\nAbout Q3, since we can not obtain $d^{\\pi_{2}}$ directly, we turn to approximate it using previously sampled data. And it does exist distribution mismatch. \n\nAbout why we say the distribution mismatch is tolerable, we verified it by experiments in our paper. In Figure 3 of Appendix E.5 (quickly see in https://anonymous.4open.science/r/Picture-246D/README.md), we compare the estimation of model shifts after updating (when we can obtain samples from $d^{\\pi_{2}}$) and pre-estimation of model shifts before model updating (what we use for approximation). From this practical result, we can see their trends stand consistent, and the bias in the approximation is tolerable as it can be bridged by adjusting the constraint threshold $\\alpha$. \n\nBefore, we provided a proof that similar model derives similar distribution from the perspective of control theory under several assumptions in the Appendix. \n\nBelow we try to explain from the general MBRL process perspective, but it is not the focus of our paper. With a limited model shift and rollout policy shift, the difference between the rollout data distributionsfrom two models is limited. Because the policies are derived from the rollout data and then collect interation data, the interaction data difference might also be limited. Specifically, let model shift $\\epsilon_{M_1, M_2}^{\\pi} = E_{s,a\\sim P_{M_1}^\\pi,\\pi}[D_{TV}(P_{M_1}(\\cdot\\vert s,a)\\Vert P_{M_2}(\\cdot\\vert s,a)]$, rollout policy in $M_1$ be $\\pi$, rollout policy in $M_2$ be $\\pi'$, rollout policy shift be $\\delta_{\\pi, \\pi'}^{M_1} = E_{s\\sim d_{M_1}^{\\pi}}[D_{TV}(\\pi'(a\\vert s)\\Vert \\pi(a\\vert s))]$. \n\n\nThen, we have that $\\Vert d_{M_2}^{\\pi'} - d_{M_1}^{\\pi} \\Vert_1 \\leq \\frac{2}{1-\\gamma} (\\delta_{\\pi,\\pi'} + \\epsilon_{M_1, M_2}^\\pi)$ , which is limited as well. We assume that the policies derived from these two distribution are similar as well. Then we have that $d^{\\pi_2}$ similar to $d^{\\pi_1}$ as well, i.e., $\\Vert d^{\\pi_2} - d^{\\pi_1}\\Vert_1 \\leq \\frac{2}{1-\\gamma}\\delta_{\\pi_1,\\pi_2}$\n\nThank you again for your reply. Hope we have resolved your concerns and we are glad to have further discussions if you have any questions.",
" Thank you for your reply. We really enjoy communicating with you and appreciate your efforts. \n\nIt is hard to give a direct solution to solve the constrained optimization problem in Proposition 4.7 under a general setting. A general proof for a feasible solution of Proposition 4.7 is quite exciting but is not the focus of the work yet. \n\nThe event-triggered mechanism is a practical design to follow the inspiration from Proposition 4.7. Through the event-triggered mechanism, we can decouple the constraint and objective of this intractable constraint optimization problem and solve them asynchronously, by detecting the model shifts constraint in the policy exploration stage and optimizing the objective function in the model training stage. Besides, the special feasible solution example in Corollary 4.8 implies that the dynamically varying model training interval may help the monotonicity. This inspiration is consistent with the event-triggered mechanism.\n\nHope we have resolved your concerns and we are glad to have further discussions if you have any questions. ",
" \nDear reviewer, \n\nThank you for helping us improve the paper and for updating the score! We really appreciate your comments and suggestions!\n\nBest wishes!\n\nThe authors.",
" Thanks the authors for their thorough response as well as their efforts to revise the paper to make it more clear. I have increased my score by 1. ",
" Thank you for your reply! We respond to the concerns below:\n\n> **Q1:** The authors use $\\sum_{s'\\in\\mathcal S}|P_{M_1}(s'|s,a)-P(s'|s,a)|$ to estimate the model shifts in the response to Q1, which is different from $vol(\\mathcal S_{\\mathcal D})\\cdot \\mathcal L(\\Delta \\mathcal D)$ in Line 247. The authors may want to provide the detailed derivation from $\\sum_{s'\\in\\mathcal S}|P_{M_1}(s'|s,a)-P(s'|s,a)|$ to $vol(\\mathcal S_{\\mathcal D})\\cdot \\mathcal L(\\Delta \\mathcal D)$.\n\nHere, $vol(\\mathcal S_{\\mathcal D})\\cdot \\mathcal L(\\Delta \\mathcal D)$ is a practical design.\n\nFirstly, $\\sum_{s'\\in\\mathcal S}|P_{M_1}(s'|s,a)-P(s'|s,a)|$ is a intermediate approximation result, from $D_{TV}(P_{M_1}(\\cdot|s,a)||P_{M_2}(\\cdot|s,a))$ to $vol(\\mathcal S_D)\\cdot \\mathcal L(\\Delta \\mathcal D)$. \n\nThen, $\\sum_{s'\\in\\mathcal S}|P_{M_1}(s'|s,a)-P(s'|s,a)|$ is incalculable directly because the whole state space $\\mathcal S$ is unknown and inaccessible. During the training process, we can not access all state $s'$ covering $\\mathcal S$. And, the current state coverage $\\mathcal S_{\\mathcal D}$ would be varying, as shown in Figure 3(b). Thus, $vol(\\mathcal S_{\\mathcal D})$ should be taken into consideration.\n\nHence, we use $vol(\\mathcal D)$ to estimate the summation space size and adopt the average predictive error $\\mathcal L(\\Delta \\mathcal D)$ for the disagreement on newly encountered data $\\Delta \\mathcal D$.\n\n> **Q2:** The authors provide the details for computing the proposed volume $vol(\\mathcal S_{\\mathcal D})$ in the response to Q6. The authors may want to further provide the detailed formulation of $vol(\\mathcal S_{\\mathcal D})$.\n\n$vol(S_D)$ is a notation in our algorithm for the state(policy) coverage, which means the range of state spaces that the current policy $\\pi_i$ can explore in the real environment. In our implementation, we adopt $vol(S_D)$ to estimate the size of the state space $\\mathcal S_{pc}^{\\pi_i}$ so that we use $\\mathcal S_{\\mathcal D}$ for denotation. Here, we give a more formal definition about the state(policy) coverage, we use $\\mathcal S_{pc}^{\\pi_i}$ instead of $\\mathcal S_{\\mathcal D}$ here for clarity. For a given policy $\\pi$, the corresponding state(policy) coverage $\\mathcal S_{pc}^{\\pi_i}$ can be formulated as $\\mathcal S_{pc}^{\\pi_i}: \\forall s\\in \\mathcal S_{pc}^{\\pi_i}, a\\sim\\pi_i(\\cdot|s), s'\\sim P(\\cdot|s,a)\\in\\mathcal S_{pc}^{\\pi_i}$. And, $vol(\\mathcal S_{pc}^{\\pi_i})$ is the size of the state space $\\mathcal S_{pc}^{\\pi_i}$.\n\n> **Q3:** The estimation of the model shifts in Line 247 is $vol(\\mathcal S_{\\mathcal D})\\cdot \\mathcal L(\\Delta \\mathcal D)$, which is conflict with $vol(\\mathcal S_{\\mathcal D}\\cup\\Delta \\mathcal D(\\tau))\\cdot \\mathcal L(\\Delta \\mathcal D(\\tau))$ in the response to Q7.\n\n$vol(\\mathcal S_{\\mathcal D_t}\\cup\\Delta \\mathcal D(\\tau))\\cdot \\mathcal L(\\Delta \\mathcal D(\\tau))$ is a temporal formulation of $vol(\\mathcal S_{\\mathcal D})\\cdot \\mathcal L(\\Delta \\mathcal D)$ so that these two terms are not in conflict.\n\nIn the term $vol(\\mathcal S_{\\mathcal D})\\cdot \\mathcal L(\\Delta \\mathcal D)$, $vol(\\mathcal S_{\\mathcal D})$ means the policy coverage of the learned model $P_{M_2}$, and $\\mathcal L(\\Delta \\mathcal D)$ represents the disagreement on newly encountered data $\\Delta \\mathcal D$ in the interval from $P_{M_1}$ to $P_{M_2}$. 
\n\nFor the term $vol(\\mathcal S_{\\mathcal D_t}\\cup\\Delta \\mathcal D(\\tau))\\cdot \\mathcal L(\\Delta \\mathcal D(\\tau))$, we get the learned model $P_{M_1}$ at timestep $t$ so the replay buffer is $\\mathcal S_{\\mathcal D_t}$. When the derived policy interacts with the environment for a period of time $\\tau$, the newly encountered data is $\\Delta \\mathcal D(\\tau)$. Thus, we adopt $vol(\\mathcal S_{\\mathcal D_t}\\cup\\Delta \\mathcal D(\\tau))$ and $\\mathcal L(\\Delta \\mathcal D(\\tau))$ for the estimation of model shift at timestep $t+\\tau$. Sorry for missing the subscript $t$ in $\\mathcal S_{\\mathcal D_t}$ in our previous response to Q7.\n\nThank you again for helping us improve the paper! We will continue to polish our paper according to your careful suggestions. Hope we have resolved your concerns and we are glad to have further discussions if you have any questions.",
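To illustrate how the quantities above fit together in practice, here is a small Python sketch of the event-trigger check: it overestimates the model shift by the coverage volume times the average prediction error on newly encountered data and compares the product against a threshold $\alpha$. The exact triggering rule is Eq. 5.1 in the paper and may differ from this simplified form; the ensemble interface and all names here are hypothetical.

```python
import numpy as np

def ensemble_prediction_error(ensemble, transitions):
    """Average one-step prediction error of a K-model ensemble on newly
    encountered transitions Delta-D (an O(K * N) computation).
    `ensemble` is a list of callables s_hat = f(s, a); hypothetical interface."""
    errors = []
    for (s, a, s_next) in transitions:
        preds = np.stack([f(s, a) for f in ensemble])   # (K, state_dim)
        errors.append(np.linalg.norm(preds.mean(axis=0) - s_next))
    return float(np.mean(errors))

def should_update_model(ensemble, new_transitions, coverage_volume, alpha):
    """Event-trigger sketch: overestimate the model shift as
    coverage_volume * average prediction error on Delta-D, and switch from
    data collection to model training once it reaches the threshold alpha."""
    shift_estimate = coverage_volume * ensemble_prediction_error(ensemble, new_transitions)
    return shift_estimate >= alpha

# Toy usage with a fake two-model "ensemble" and random 3-D transitions.
rng = np.random.default_rng(0)
fake_ensemble = [lambda s, a: s + 0.10 * a, lambda s, a: s + 0.12 * a]
batch = [(rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)) for _ in range(32)]
print(should_update_model(fake_ensemble, batch, coverage_volume=5.0, alpha=10.0))
```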
" Thanks for the comments and explanations. I agree with the justification given on Q2. \n\nHowever, the justification on Q3 is still unclear to me as to how the approximate method can tackle the distribution mismatch for this particular setting ? Can you provide a more detailed justification on why will the distribution mismatch be tolerable for this particular setting of MBRL. Thanks.",
" Thanks for the authors' rebuttal. I have read the response and all the other reviewers' comments. However, I still have some concerns of the constraint estimation, which have not been properly addressed.\n\n1. The authors use $\\sum\\_{s' \\in \\mathcal{S}} | P\\_{M\\_1}(s'|s,a) - P(s'|s,a) |$ to estimate the model shifts in the response to Q1, which is different from $vol( \\mathcal{S}\\_{\\mathcal{D}}) \\cdot \\mathcal{L} ( \\Delta \\mathcal{D})$ in Line 247. The authors may want to provide the detailed derivation from $\\sum\\_{s' \\in \\mathcal{S}} | P\\_{M\\_1}(s'|s,a) - P(s'|s,a) |$ to $vol( \\mathcal{S}\\_{\\mathcal{D}}) \\cdot \\mathcal{L} ( \\Delta \\mathcal{D})$.\n\n2. The authors provide the details for computing the proposed volume $vol(\\mathcal{S}\\_\\mathcal{D})$ in the response to Q6. The authors may want to further provide the detailed formulation of $vol(\\mathcal{S}\\_\\mathcal{D})$.\n\n3. The estimation of the model shifts in Line 247 is $vol( \\mathcal{S}\\_{\\mathcal{D}}) \\cdot \\mathcal{L} ( \\Delta \\mathcal{D})$, which is conflict with $vol(\\mathcal{S}\\_{\\mathcal{D} \\cup \\Delta \\mathcal{D} (\\tau)}) \\cdot \\mathcal{L}(\\Delta \\mathcal{D} (\\tau))$ in the response to Q7.",
" Thanks for the comments. I understand that 4.8 is an example and not the primary result. I could not find a general proof without the generative model assumption, can you please point me to the same.",
" Thank the authors for the comment. It clarifies the difference between their work and previous ones. However, the advantage is still not clear. For example, regarding \"when $\\epsilon_m>0$, this approximation fails\", $\\epsilon_m$ is the model error of the policy w.r.t. a fixed (e.g. local) model, which already gives nice monotonic property when assuming access to optimization oracle (i.e. no need to consider future model, model shift etc). The proposed method seems to be the dual form of works such as DPI (i.e. not limiting policy, but limiting models). \n\nWhat I find interesting is that the proposed method might encourage exploring by not imposing constraints on policy updates. However, current results (monotonic improvement) are not enough for concluding efficient exploration, which can still get stuck at local optima. I suggest the authors take this into consideration, given the unclear theoretical advantage compared to existing methods.",
" Dear reviewer,\n\nWe first thank you again for your comments and suggestions. We hope our last reply has resolved all your concerns. If you have any other questions, we are also pleased to respond. We sincerely look forward to your response.\n\nBest wishes!\n\nThe authors.",
" Dear reviewer,\n\nWe appreciate your comments and suggestions. We hope our last reply has resolved all your concerns. We have refined our explanation and added experiments as you suggested. If you have any other questions, we are also pleased to respond. We sincerely look forward to your response.\n\nBest wishes!\n\nThe authors.",
" Dear reviewer,\n\nWe first thank you again for your valuable comments and suggestions. In the previous replies, we think we have addressed your concerns point by point and refined details in the rebuttal revision as you suggested. We sincerely look forward to your reply to our response.\n\nBest wishes!\n\nThe authors.",
" **Response summary.** Thanks for your review of our work. Based on your view, it seems that you have two main concerns and several other minor concerns.\n\n1. **The advantage of global view monotonicity**: Why do we care about the *optimal* value under a model and what is the advantage compared to the local view?\n\n2. **Comparison between CMLO and previous works**: It would be better to have more comparisons between CMLO and previous works (e.g. DPI), although they have different proof structures.\n\nWe provide clarification to your concerns as below. We appreciate it if you have any further questions or comments.\n\nFirstly, we will elaborate on the advantages of our theoretical scheme and results, and give a comparison with previous work.\n\n**Major Concerns**\n\n**1. Intuition and Summary**: \n\n MBRL methods alternate between the two stages: model learning and policy optimization. This is a chain reaction of alternating two stages. Analyzing only the effect of one on the other is not enough. As is known, in MBRL, model accuracy often acts as the bottleneck to policy performance. \n In previous works with the local view, they analyze the effect of model accuracy on policy performance by coarsely assuming an upper bound of model bias $\\epsilon_m$. \n However, a crucial complexity of MBRL is that besides that model accuracy affects policy quality, policy, in turn, does affect the outcome of model learning via the data collected from its interaction with the environment. \n\n In a nutshell, the following problems are crucial in MBRL and have been less explored and not well guaranteed in the local-view works. \"How does the policy affect model updating? What is a indeed better model in MBRL? Can model-based RL algorithms be guranteed to improve the policy monotonically when considering model shifts? \"\n\n Our proposed global view monotonicity analysis focuses on the issues mentioned above and has several advantages:\n\n * Our analysis considers how the policy exploration affects the model shifts, and then can help to improve model accuracy. However, the local view ignores the varying model shifts and crudely treats model learning as a supervised learning process independent from policy exploration.\n\n * Our theory indicates that $M_{i+1}$ is better than $M_i$ when the performance of sub-optimal derived policy $\\pi_{i+1}$ evaluated under the real environment is higher than that of $\\pi_i$. In contrast, the local view merely adopted the validation loss to measure the quality of a learned model, which is an isolated measurement ignoring the inherently entangled nature of MBRL. \n\n * Our theory provides a monotonicity guarantee considering the varying model shifts. We will detail the drawbacks of the local view analysis due to the disregarding of model shifts as follows.",
" **2. Detailed Analysis**: \nTo begin with, an important fact is that the effect of model shifts on trajectories is drastic. For example, even when the system dynamics satisfy $L$- Lipschitz continuity, along with the policy and the initial state be the same, the difference in trajectories sampled in $M_1, M_2$ grows at $e^{LH}$ with the length $H$ of the trajectory [1]. As the model shifts decay, the trajectory discrepancy will also decrease sharply. It implies that model shift stays a substantial influence during the MBRL training process. There are two main trends of local view analysis (for readability, the conclusions in previous work will be rewritten with the notation of our paper). \n\n(1) **API [2] class:** Their recipe for monotonicity analysis is $V^{\\pi_{n+1}} (\\mu)- V^{\\pi_n}(\\mu) \\geq C(\\pi_n, \\pi_{n+1}, \\epsilon_m)$. If policies update $\\pi_n\\rightarrow \\pi_{n+1}$ could provide a non-negative $C(\\pi_n, \\pi_{n+1}, \\epsilon_m)$ , then the performance is guaranteed to increase. Here, $\\epsilon_m = \\max_{\\pi\\in \\Pi, M\\in \\mathcal M} E_{s,a\\sim d^{\\pi}}[\\mathcal D_{TV}(P(\\cdot\\vert s,a)\\Vert P_M(\\cdot\\vert s,a))]$ .\n* Most previous works ([2] [3]) were derived under model-free settings ($\\epsilon_m=0$) through conservative policy iteration, e.g., by forcing $\\mathcal D_{TV}(\\pi_n\\Vert \\pi_{n+1})\\leq \\alpha$), then the state-action distribution are close as well $\\mathcal D_{TV}(d^{\\pi_n}\\Vert d^{\\pi_{n+1}})\\leq \\frac{\\alpha\\gamma}{1-\\gamma} $, so that they can optimize over their performance difference lemma $C(\\pi_n, \\pi_{n+1}, 0) \\approx \\frac{1}{1-\\gamma}E_{s,a\\sim d^{\\pi_n}}[A^{\\pi_n}(s,a)] $.\n* When $\\epsilon_m>0$, this approximation $C(\\pi_n, \\pi_{n+1}, \\epsilon_m) \\approx \\frac{1}{1-\\gamma}\\mathbb{E}_{s,a\\sim d^{\\pi_n}}[A^{\\pi_n}(s,a)] $ fails. \n\n* **DPI**: Firstly, DPI [4] focused on policy optimization in a fixed model, cf. Theorem 3.1, is why we call it \"a fixed model perspective\".\n\nSecond, DPI tries to force $\\pi_{n+1}$ and $\\pi_n$ to be close, which will result in a high similarity of the data sampled. Then a risk arises from it, this approach would limit the growth of the policy exploration in the real environment, thus leading the inferred models to stay optimized in a restrictive local area. For example, in the Humanoid environment, the agent struggles to achieve balance at the beginning of training. An updated restricted policy will cause the exploration space to be limited in such an unbalanced distribution for a long time, and the learned model in such highly repetitive data will converge quickly with a validation loss be zero. However, the success trajectory has not been explored yet, causing both the policy and the learned model to fall into a poor local optimum.\n\nBesides, the definition of model accuracy (Eq.3) is a local view in DPI, i.e., $\\hat{P}$ is $\\delta$-opt under $d^{\\pi_n}$. If we replace model accuracy with a more general, global definition (for example, $\\hat{P}$ is $\\delta$-opt under $d^{\\pi_{n+1}}$ , or $\\hat{P}$ is $\\delta$-opt under all $(s,a,s')$ tuples), we find that the $\\delta$ in (Eq. 3) will be large at the initial steps, making it difficult to obtain a local optimal solution in Theorem 3.1. Finally, theoretical analysis in DPI can only guide the policy iteration process, while the update of the model is passive, which is different from our global view theory.",
" (2) **Discrepancy bound class:** They derive upon $V^{\\pi_n}(\\mu)\\geq V_M^{\\pi_n}(\\mu) - C(\\epsilon_m, \\epsilon_\\pi)$. As guaranteed in them, once a policy update $\\pi_n \\rightarrow \\pi_{n+1}$ has improved returns under the same model $M$, i.e., $V_{M}^{\\pi_{n+1}}(\\mu) > V_{M}^{\\pi_n}(\\mu) + C(\\epsilon_m, \\epsilon_\\pi)$ , it would improve the lower bound on the performance evaluated in the real environment, i.e., $\\inf\\{V^{\\pi_2\\vert M}(\\mu)\\} > \\inf \\{ V^{\\pi_1\\vert M(\\mu)}\\} $\\}. \n\nTheir theory is based on a fixed model $M$, or an upper bound on the distribution shift of all models $\\epsilon_m$. It does not concern the change in model dynamics during updating, nor the performance varying due to the model shift. Moreover, The solution would be very coarse if only the upper bound of the model shift is given. Even worse, the given upper bound is likely to be too large, then it will fail to find a feasible solution for $V_M^{\\pi_{n+1}}(\\mu) - V_M^{\\pi_n}(\\mu) \\geq C(\\epsilon_m, \\epsilon_\\pi)$ in practice, thus making the monotonicity guarantee fails. \n\nIn summary, our proposed theoretical framework provides a new perspective on model-based RL monotonicity which considers the entangled nature of model learning and policy optimization. It will be useful for guiding the optimization model updates and might be helpful to understand several perspectives of model-based RL that have been rarely explored before. \n\n [1] Li H, Shi Y. Event-triggered robust model predictive control of continuous-time nonlinear systems[J]. Automatica, 2014.\n\n [2] Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning.In ICML, 2002.\n\n [3] Thanard Kurutach et al. Model-ensemble trust-region policy optimization. In ICLR, 2018.\n\n [4] Wen Sun et al. Dual policy iteration. In NeurIPS, 2018.\n\n [5] Michael Janner et al.When to trust your model: Model-based policy optimization. In NeurIPS, 2019.",
" We have detailed above the advantages of our theory, and the comparison with previous approaches. We now respond to your other concerns one by one. \n\n> **Q1:** \"The monotonic results are not novel in MBRL, for example, the authors cite DPI [1], which established similar monotonic results. But the authors claim that previous works \"characterize the monotonicity in terms of a fixed model of interest\". I don't see the reason for such a claim and the exact advantage of CMLO.\"\n\nOur theoretical analysis is quite different from DPI, we have a different scheme, assumptions, proof structures, and results. For example, DPI forces $\\pi_n$ and $\\pi_{n+1}$ to be close, while we do not have such assumptions. Instead, we derive under the policy optimization oracle, which requires the $\\pi_n$ to be $\\epsilon_{opt}$-optimal under its corresponding model $M_n$. Under Dyna-style MBRL, our assumptions are more relaxed. Our theory framework allows for the replacement of different policy optimization algorithms, and thus we are free to exploit the advantages of advanced model-free algorithms. \n\nWe have detailed our advantages over DPI in the response to your **major concerns**. Besides, the practical instance of DPI may fail if the reward function could not be approximated by the quadratic function. DPI focused on the policy optimization in a fixed model, cf. Theorem 3.1, is why we call it \"a fixed model perspective\". \n\n> **Q2:** \"It would be better to have more comparisons between CMLO and previous works (e.g. DPI). Although they have different proof structures (from the local and global view?), the monotonic improvement results are similar in my view.\"\n\nThanks for your suggestion. In our response to your **major concerns**, we elaborate on the differences between our work and the previous two classes of work on monotonicity. We are distinct from them in terms of scheme, assumptions, proof structures, results, and guidance to the algorithm. And we have included the comparison in our rebuttal revision.\n\n> **Q3:** \"Why do we care about the *optimal* value under a model and what is the advantage compared to the local view? But I can't judge the necessity of doing this. E.g., in theorem 4.5, why do we care about the *optimal* value under a model? Notably, the monotonic property rather than global optimality is of interest. According to the performance difference lemma, shouldn't we focus more on the value-under-model evaluated using the *current policy*?\"\n\nWe have detailed the advantage of the global view analyses in our response to your **major concerns**. It has rarely been explored in previous MBRL. \n\nAs we discussed above, in MBRL, model quality is the bottleneck of policy eventual performance. The sub-optimal value under a model is a novel and effective way to measure the model quality. Therefore, our analysis helps to guide the model improvement, whereas previous local view monotonicity analysis is hard to do so. \n\nBesides, focusing on the optimal value could be regarded as marginating on policy iteration within a model so that our analysis is generic under Dyna-style without worrying about different policy optimization methods (as shown in our ablation study, see Figure 5.). Owing to our proposed theoretical scheme, our algorithm does not conflict with the local view, and we marginate the policy iterations under a fixed model by our policy optimization oracle. 
And it allows us to employ many local-view methods to improve the monotonicity of single-step policy iterations.\n\nFinally, as we analyzed in our response to your major concerns, the inevitable model shifts in MBRL have a significant impact on policy performance and trajectory bias. Those works that care only about the \"value-under-model evaluated using the current policy\" (e.g., MBPO) usually give an upper bound on the model bias, which is a crude approach and may lead to monotonicity conditions that do not have feasible solutions. ",
" > **Q4:** \"Another confusion I have is the notation of state coverage. As it is introduced from a global point of view, I can't judge its necessity before understanding the necessity of the global view. Besides, it is better to include the exact definition of state coverage in the context of RL, so that we can see why it is estimated with the replay buffer.\"\n\nDefinition: State coverage (policy coverage) is the range of state spaces that our algorithm can explore in the real environment under the current policy $\\pi_i$ (derived from the learned model $M_i$). In the existing works, [1] defined the return set for two state sub-space as $\\overline R_{ret} = \\lim_{n\\rightarrow \\infty} R^n_{ret} (X,\\bar{X} )$, \n\n\nwhere $R_{ret}^n(X,\\bar X) $ means an n-step returnability from $X$ to $\\bar X$. Referring to this definition, the state coverage of $\\pi_i$ can be defined as $\\mathcal S_{pc}^{\\pi_i}: \\forall s\\in \\mathcal S_{pc}^{\\pi_i}, a\\sim\\pi_i(\\cdot|s), s'\\sim P(\\cdot|s,a)\\in\\mathcal S_{pc}^{\\pi_i}$. Besides, in the description of La Salle's Invariance Principle [2], we verify the equivalence of Invariant Set and state coverage. Intuitively, the Humanoid example in our response to your major concerns also shows that the variation of state coverage in the different training stages.\n\nNecessity: As we discussed before, state coverage reflects the ability of policy exploration, which affects the final optimal value. When state coverage is improved, the policy $\\pi_n$ can obtain more unseen samples in the exploration phase from the real environment, which further improves the model accuracy of $M_{n+1}$ and the optimization value of the derived policy $\\pi_{n+1}$. This is also reflected by the consistent change in our Figure 3(a) state coverage and the performance in Figure 1.\n\nEstimation: We use the replay buffer $\\mathcal{D}$ to store the explored samples by policy $\\pi_n$ from the real environment. Notice that these samples are a subset of the policy coverage $\\mathcal{S}_{pc}^{\\pi_i}$, thus we can utilize $\\mathcal{D}$ to estimate the state coverage as a practical implementation.\n\nSorry for the insufficient explanation, and we have polished our description of state coverage per your concerns in the rebuttal revision.\n\n[1] Wachi A, Sui Y. Safe reinforcement learning in constrained Markov decision processes[C]//International Conference on Machine Learning. PMLR, 2020: 9797-9806.\n\n[2] Slotine J J E, Li W. Applied nonlinear control[M]. Englewood Cliffs, NJ: Prentice hall, 1991.\n\n\n> **Q5:** \"The intuition behind the event-triggered equation 5.1 is also not very clear. What will the fraction of state coverage give?\"\n\nAbout intuition: Equation 5.1 is the event-triggering condition motivated by Proposition 4.7. Proposition 4.7 implies that, when the model shift constraint boundary is touched according to the estimation, we need to pause data collecting and turn to solve the optimization objective. Besides, although turning to train models once not to violate the constraint is theoretically reasonable, we avoid doing so in practice because performing an update on data with a minor shift in coverage and distribution is wasteful and may risk overfitting (Line 250-253). \n\nAbout fraction: The numerator $vol (\\cal S_D)$, on the one hand, is to reduce numerical errors; on the other hand, this fraction reflects the relative change of the policy coverage and model shift if we turn to train $M_2$ under different $\\tau$ starting from $M_1$. 
This fraction reflects the current ability to digest new data. It also facilitates the setting of the threshold $\alpha$, since we do not need to re-tune $\alpha$ whenever the policy coverage changes.\n\n> **Q6:** \"How much additional time does it cost for estimating model shifts?\"\n\nActual computation time: compared to the unconstrained cases (ablation studies in Figure 2), our total training time increased by an average of 3.24h in the HalfCheetah environment (300k steps) and 3.96h in the Ant environment (300k steps). Our computing infrastructure and the computational time of CMLO are listed in Appendix Table 5. \n\nTime complexity analysis: \n\n* For state-space coverage: we perform Principal Component Analysis to reduce the dimension and then leverage the Graham-Scan algorithm to construct a convex hull of these $N$ points, which only takes $O(N \log N)$ time.\n\n* For model divergence: We estimate the model divergence ($K$ ensemble models) by computing the average prediction error on $N$ newly encountered data points, which only takes $O(KN)$ time.\n\nFinally, we hope we have resolved all of your concerns and will continue to polish our language and the clarity in our revision. Thanks again for your comments, and we hope you will reconsider your score.\n",
" Thank you for your valuable comments and suggestions, which are of great help to improve the quality of our work. We sincerely appreciate your positive comments on our proposed theory as a novel and interesting analysis to address an important question. We carefully answer each of your concerns as below.\n\n> **Q1:** \"The primary weakness lies in various assumptions that have been made to derive some of the primary results that are not very general. To begin with, the main result in Corollary 4.8 is shown with a generative model assumption which is quite restrictive. The primary novelty of the research from a solution perspective lies in designing the event-triggered mechanism and the theory is shown with a generative model assumption also with linear quadratic regulator (which I still believe can be relaxed) and not for a general scenario.\"\n\nThere might be a misunderstanding. We design the event-triggered mechanism based on Proposition 4.7 which doesn't depend on the generative model assumption and linear quadratic regulator.\n\nCorollary 4.8 is not a primary result but just an example. It is used to support the motivation of designing dynamically varying model training intervals but not to guide the algorithm design directly. To give Corollary 4.8, we followed assumptions from these papers [1] [2], and these assumptions would not hurt the generalizability of the event-triggered mechanism. It is indeed exciting to give some other feasible solutions for Proposition 4.7 under weaker assumptions, but it is not the focus of this work yet. \n\n[1] Alekh Agarwal, Sham Kakade, and Lin F Yang. Model-based reinforcement learning with a generative model is minimax optimal. In Conference on Learning Theory, 2020.\n\n[2] Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, and Yuxin Chen. Breaking the sample size barrier in model-based reinforcement learning with a generative model. In Advances in Neural Information Processing Systems, 2020.",
" \n> **Q2:** “it is not very clear how minimization of the constrained objective function in Proposition 4.7 turns out to be simple negative likelihood minimization. ”\n\nIt is not trivial to turn the objective function in Proposition 4.7 directly into a loss function. Although not directly derived, negative likelihood minimization (NLL) has an optimization objective consistent with it. The objective function encourages us to alleviate model error as much as possible. It motivates us to use NLL for implementation in our practice. NLL method is typically adopted[1] [2] and is effective in learning the transition dynamics of probabilistic models. \n\nWe detailed the formulation of our model learning in appendix Line 172-178. To be specific, each dynamical model $f_{\\phi_i}$ in the ensemble is a probabilistic neural network that outputs a Gaussian distribution with diagonal covariance , $ f_{\\phi_i}(\\cdot\\vert s_t, a_t) = {\\cal N}(\\mu_{\\phi_i}(s_t, a_t), \\Sigma_{\\phi_i}(s_t, a_t))$. These models are trained independently via maximum likelihood. Thus the corresponding loss function is:\n\n${\\cal L}^H(\\phi_i) = \\sum\\limits_{t}^{H}[\\mu_{\\phi_i}(s_t,a_t)-s_{t+1}]^T\\Sigma_{\\phi_i}^{-1}(s_t,a_t)[\\mu_{\\phi_i}(s_t,a_t)-s_{t+1}] + \\log \\det \\Sigma_{\\phi_i}(s_t,a_t)$\n\nAnd the prediction for these ensemble models is, $\\hat s_{t+1} = \\frac{1}{K}\\sum_{i = 1}^{K} f_{\\phi_i}(s_t, a_t)$.\n\n[1] K. Chua, R. Calandra, R. McAllister, and S. Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems (NIPS), pages 4754–4765, 2018.\n\n[2] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, 2019. \n\n\n> **Q3:** \"In Proposition 4.7, there is $P(s′|s,a)$ involved in the expression which can't be ignored since the $(s,a) \\sim d_{\\pi_2}$ and optimization variable involve $\\pi_2$ as well. it's not clear how this is handled as we will never know $P(s′|s,a)$, and how this boils down to simple NLL minimization is not explicitly described\"\n\nSorry for our insufficient explanation, we have refined it in the rebuttal revision.\n\nWe cannot obtain $\\pi_2$ directly because it is a suboptimal policy under $M_2$. At the time of optimization, $M_2$ is still not available, and $P(s'\\vert s,a)$ is not a priori knowledge, thus it is not practical to obtain the true $d^{\\pi_2}$. We agree on this point. \n\nDue to the impracticality of solving the optimization objective directly, we turn to design some approximation techniques in implementation. Below we describe the rationality of these designs in our implementation:\n\n* As inferred from the optimization objective, the minimization of the objective function can be achieved when we try to minimize the difference between $M_2$ and the real environment. To reduce model bias, we chose to use NLL as a loss function in our implementation, which has been shown an effective way to learn model dynamics. \n\n* Besides, we perform model learning on the current interaction tuples. The distribution mismatch indeed exists due to policy difference, this mismatch is somewhat tolerable (we explained it from the perspective of control theory in the Appendix). Off-policy reinforcement learning algorithms also adopt similar techniques by using existing interaction data for learning and optimization. ",
" > **Q4:** “ I see in the derivations on lines 90, 97, 128, etc. the proofs involve summation over s,a which are relevant for Tabular or Discrete settings but the experiments are for continuous state-action space which causes a mismatch and some of the proofs might break when everything is expressed for continuous state-action spaces.”\n\n* Corollary 4.8 (Line 90, 97): It is derived upon the generative model setting, so it indeed need the Tabular or Discrete settings. Note that it is only used to give a feasible example of Proposition 4.7 for understanding but not guide the algorithm directly. Our algorithm is free of discrete settings. \n* Lemma C.2 (Line 128): The proof here can turn to under the continuous state-action space setting. \n \n $ V^\\pi(\\mu) - V_M^\\pi(\\mu) \\geq -\\sum_{h=0}^{\\infty} \\gamma^h \\vert E_{s,a\\sim \\rho_h^\\pi(\\mu;P)}[r(s,a)] - E_{s,a\\sim \\rho_h^\\pi(\\mu;P_M)}[\\gamma^h r(s,a)] \\vert$\n$\\geq -\\sum_{h=0}^{\\infty} \\gamma^h \\int_{s\\in \\mathcal S}\\int_{a\\in {\\cal A}} R\\vert \\rho_h^\\pi(\\mu; P) - \\rho_h^\\pi(\\mu; P_M) \\vert da\\ ds$\n $ = -2R\\cdot \\sum_{h=0}^{\\infty}\\gamma^h \\frac{1}{2}\\int_{s\\in \\mathcal S}\\int_{a\\in {\\cal A}} \\vert \\rho_h^\\pi(\\mu; P) - \\rho_h^\\pi(\\mu; P_M)\\vert da\\ ds$\n\n $= -2R \\cdot \\sum_{h=0}^{\\infty} \\gamma^h {\\cal D}_{TV} (\\rho_h^\\pi(\\mu; P)\\Vert \\rho_h^\\pi(\\mu; P_M)) $\n\nWe choose the summation form for more friendly to readers. Besides, we follow the symbolic system of MBPO[1] partly, which also adopts a summation form for analysis and then experiments in continuous space. We checked our primary results will not be affected. We are pleased to discuss if there is something not well thought out. \n\n[1] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, 2019.\n\n> **Q5:** \"Can you please illustrate how exactly is the volume of the of convex closure computed from the replay buffer?\"\n\nWe detailed how to compute convex closure from the replay buffer in Appendix Line 181-187. As for the convex hull, we first perform Principal Component Analysis on the states to reduce the dimension and then leverage the Graham-Scan algorithm to construct a convex hull of N points which are sampled from the replay buffer. \n\n> **Q6:** \"Can you please point me to the equation where you derive an estimation for the model shift as a product of volume × avg prediction error?\"\n\nOur proposed constraint estimation is a practical design, so that we call it \"practical overestimation\" in the paper. We have provided an explanation for the overestimation in Appendix D.5, paragraph \"Estimation on model shifts\". Besides, Appendix Figure 3 demonstrates that our prediction is higher than the true estimation and their trends stand consistent.\n\nAs discussed in Line 239-240, the constraint is based on an unobserved model $M_2$ so that we seek to construct a surrogate function and use current data to estimate it. 
More specifically, recall the constraint function $\\mathcal D_{TV}(P_{M_1}(\\cdot\\vert s,a)\\Vert P_{M_2}(\\cdot\\vert s,a)) \\leq \\sum_{s'\\in {\\cal S}}\\frac{1}{2}[\\vert P_{M_1}(s'\\vert s,a)- P(s'\\vert s,a)\\vert +\\vert P_{M_2}(s'\\vert s,a)-P(s'\\vert s,a)\\vert]$.\n\nAs the updated dynamics $P_{M_2}$ usually comes closer to the true dynamics $P$ than the previous one $P_{M_1}$, we can use $\\sum_{s'\\in {\\cal S}}\\vert P_{M_1}(s'\\vert s,a)- P(s'\\vert s,a)\\vert$ to estimate it. And we use the volume to estimate the summation space of $s'$. ",
" \n>**Q7:** \"Instead of going by the complicated event-triggered mechanism, if I simply try to obey the constraint in the optimization problem with a TRPO type regularized update in the model parameter space that should also do right? Given that we know that under the gaussian dynamics assumption, the total variation distance simply can be upper-bounded by the difference b/w the means and variances as well and hence it should be easy to be in the feasibility region.\"\n\nThanks for your proposal! We think that only applying the trust region approach is flawed. \nIf we only try to solve the pure constrained optimization problem in Proposition 4.7, methods such as trust region and Lagrange relaxation could be tried. \nHowever, these methods are problematic when viewed in the context of MBRL's general algorithmic process and properties:\nThe trust region approach directly optimizes the next model $M_2$, ignoring the inherently entangled nature of MBRL, and cannot guide policy exploration. \nLet's give two examples to illustrate this matter:\n\n* At the extreme, based on the $M_1$, even if we do not perform any exploration, we could still perform trust region optimization and get $M_2$, which is not expected.\n* In the other case, when we collect too many novel samples, optimizing $M_2$ too late will make $M_2$ unable to adapt to the new data distribution, thus resulting in data waste.\n\nThese two cases can be guided by the event-triggered approach. In addition, the event-triggered mechanism is not a very complex design, it is simple, effective, without additional training, and fits well with the MBRL process. We are happy to have further discussions if you have any other ideas and suggestions.\n\n> **Q8:** Can you please show the derivation on how you obtain line 47 in Appendix where you mention that with Lipschitzness of $V^\\pi_{M_1}$, $V^\\pi_{M_2}$ you derive the bounds for $\\vert G^\\pi_{M_1,M_2}(s,a)\\vert \\leq L\\cdot \\vert P_{M_2}(\\cdot\\vert s,a ) - P_{M_1}(\\cdot\\vert s,a)\\vert $.\n\nFrom the definition: $G^\\pi_{M_1,M_2}(s,a)= E_{{\\tilde{s}'}\\sim P_{M_2}(\\cdot\\vert s,a)}[V^\\pi_{M_2}({\\tilde{s}'})] - E_{s'\\sim P_{M_1}(\\cdot\\vert s,a)}[V^{\\pi}_{M_2}(s')]$.\n\n(1) Deterministic dynamics case: for clarity, we write $s' = M_i(s,a)$ instead of $s'\\sim P_{M_i}(s'\\vert s,a)$, then we rewrite $G_{M_1,M_2}(s,a)$ as: $G_{M_1,M_2}^\\pi(s,a) = V^\\pi_{M_2}(M_2(s,a)) - V^{\\pi}_{M_2}(M_1(s,a))$. \n\nWith the Lispchitzness, we have that $\\vert G^\\pi_{{M_1}, M_2}(s,a)\\vert \\leq L\\cdot \\vert M_2(s,a) - M_1(s,a)\\vert $.\n\n(2) Stochastic dynamics case: when $L \\geq \\frac{R}{1-\\gamma}$, we have \n\n$ \\vert G^\\pi_{M_1,M_2}(s,a)\\vert = \\vert E_{{\\tilde{s}'}\\sim P_{M_2}(\\cdot\\vert s,a)}[V^\\pi_{M_2}({\\tilde{s}'})] - E_{s'\\sim P_{M_1}(\\cdot\\vert s,a)}[V^{\\pi}_{M_2}(s')]\\vert $\n\n$= \\vert \\sum_{\\tilde{s}'\\in {\\cal S}}(P_{M_2}(s'\\vert s,a)-P_{M_1}(s'\\vert s,a))V_{M_2}^\\pi(s')\\vert $\n$\\leq \\vert \\max_{s'}V_{M_2}^\\pi(s') \\vert \\cdot \\vert P_{M_2}(\\cdot\\vert s,a ) - P_{M_1}(\\cdot\\vert s,a)\\vert \\leq L\\cdot \\vert P_{M_2}(\\cdot\\vert s,a ) - P_{M_1}(\\cdot\\vert s,a)\\vert.$\n\nHere, we have previously checked that the $L\\geq \\frac{R}{1-\\gamma}$ does not affect the results in our paper. 
\nSorry for skipping some details of the proof here, we have refined it in the rebuttal revision.\n\n> **Q9:** \"Additionally in line 48 in Appendix, you derive an upper bound for value function difference in-terms of the $|P_{M_2}(⋅|s,a)−P_{M_1}(⋅|s,a)|$. However, it has been shown in [1] that there is a dependence of the maximum reward R (and horizon $H$ which will be replaced by some factor of γ here) intermingled/ associated with the Lipschitz constant. Basically, they prove that in the Lipschitz assumption of $ \\Vert V1−V2\\Vert\\leq L\\Vert P1−P2\\Vert$, $L$ has a dependence on CHRmax. I think that should be applicable here as well, then how does the analysis gets affected? As now there is an additional dependence on the max reward that will be added. Can you please share your thoughts on the same?\n\nThanks for your exciting proposal! \n\nAs discussed in our response to A8, in the case of stochastic dynamics case, $L$ indeed has a dependence on $\\frac{R_{max}}{1-\\gamma}$. We agree that the analysis in [1] is applicable here as well. Our analysis is currently unaffected. As can be inferred from (R1), when $L$ varies, adjusting the threshold $\\sigma_{M_1,M_2}$ will enable (R1) to be feasible. \n\nWe thank the reviewer for pointing out this interesting work. We have added a discussion on the $L$ and cited this work in the rebuttal revision.\n\n[1]. Ying Fan and Yifei Ming. Model-based reinforcement learning for continuous control with posterior sampling. Proceedings of the 38th International Conference on Machine Learning, 2021.\n\nThanks again for reading our article carefully and giving very constructive suggestions. We hope that the above can resolve your concerns and we are glad to have further discussion.",
" Thank you for your comments and suggestions, the detailed responses regarding each problem are listed below. We hope to resolve the misunderstandings caused by the imperfection of our presentation. If you have any other questions, please post them and we are happy to have further discussions.\n\n> **Q1**: \"Figure 1, the performances on different tasks are capped at different timesteps. In several cases the learning curves have not stabilized yet, e.g. Walker2d and Swimmer. Could you please report the final performance where the learning curves stabilize or at 5M steps (as used to determine the “asymptotic performance” of the SAC and MBPO baselines).\"\n\nThank you for the comment. Actually, we did report the maximum average return (a kind of asymptotic performance) in Appendix Table 2. Results show that our method has comparable asymptotic performance in Walker2d (350k) and Swimmer (350k) environments. We will refine our description on this table in next revision for clarity.\n\nWe observed that MBRL baselines (MBPO, AutoMBPO) show convergence at 300k, thus we choose the 300k as the capped steps for the sake of fairness. Besides, CMLO also starts to converge around 300k. We have extended the plot by capping the curve at 350k in the rebuttal revision.\n\n>**Q2:** \"What is the y-axis in Figure 3(b)? Could you please include the policy coverage and prediction error results for some of the baselines for a comparison?\"\n\n* The y-axis in Figure 3(b) is the prediction error as shown in the caption.\n\n* Prediction error: We performed the prediction error comparison to the baseline MBPO in Appendix Figure 4. \n\n* Policy coverage: Policy coverage represents the exploration ability of the policy. The policy coverage increasing with the stages means that the policy has new explorations at every stage and may not fall into a local optimum. Per the reviewer's suggestion, we further report the numerical comparison to MBPO here. \n\n\n | Env | Algo | Stage1 | Stage2 | Stage3 | Stage4 | Stage5 |\n |-------------------|------------------|-------------------|------------------|-------------------|------------------|-------------------|\n | HalfCheetah | CMLO | 138.566126 | 182.281857 | 243.466268 | 302.816499 | 344.356213 |\n | HalfCheetah | MBPO | 129.251405 | 173.085743 | 242.492030 | 264.853917 | 338.555574 |\n | Ant | CMLO | 354.154379 | 744.91538 | 849.473640 | 876.119479 | 909.798043 |\n | Ant | MBPO | 342.134362 | 729.295456 | 821.658472 | 864.933838 | 880.252964 |\n\n Here, each stage $i$ contains $(60\\times(i-1), 60\\times i ]k$ steps. In HalfCheetah, we find that our policy achieves higher coverage especially in first 4 stages than MBPO. Consistently, we find that our policy enjoys higher performance, with an average return lead of about 1855.29 over MBPO in the first 300k steps. Likewise, the growth of policy coverage in Ant is also consistent with the rise in average return. The increase in policy coverage helps the policy to refrain from falling into a local optimum, thus improving performance.\n\nAgain, thanks for your suggestions, we have incorporated these comparison results into the rebuttal revision. \n\n> **Q3:** \"Figure 4 looks a bit confusing to me. Do the MBRL baselines start with an initial 30k steps exploration followed by model learning (inferred from Figure 4)? Why only showing 4k steps per stage instead of the whole 60k? 
What is the y-axis?\"\n\n* No, both CMLO and the other MBRL baselines start with fewer than 5k initial exploration steps.\n* For visual clarity, we arbitrarily chose 4k within each stage as an illustration, from which we can see how the trigger frequency varies with the stages. A clear display of the full 60k of data requires a lot of space, so we only display 4k per stage. We have added the full 60k figure in the appendix of the rebuttal revision for clarity. \n* The y-axis is our estimation of the triggered condition (the detailed formula is given in the Appendix, Line 202 onwards). ",
" > **Q4:** \"Figure 4(b) shows that the model shifts estimation jumps drastically from stage 1 to stage 2, and that it doesn’t hit the threshold at all in stage 1 (first 4k steps). Could the authors please explain this unexpected observation?\"\n\nSorry for the insufficient explanation of the results in Figure 4(b). The observation is indeed expected due to the Ant environment characteristics. In the initial stage, exploration in the Ant environment is quite restricted and localized, then the sampled tuples are highly repetitive, which in turn results in the low value of model error as the model has fitted well. While in the second stage, the agent undergoes an epiphany and more fresh data are collected, resulting in a jump in the average return, along with the model error increasing due to these novel data. This observation is consistent with the average return, we find the learning curves (both CMLO and baselines) smooth and rise limitedly in the initial stage (Fig. 1 Ant). \n\nAt the same time, this observation also supports our event-triggered mechanism, if we still perform frequent model updates when the sampling repetition is quite high, it is wasteful and may risk overfitting. (Line 250-253).\n\n> **Q5:** \"The absolute performance of TRPO on HalfCheetah and Ant tasks shown in Figure 5(a) are much lower than the other baseline results shown in Figure 1. The variance also seems quite high. Is it expected?\"\n\nYes, it is expected, because the absolute performance of Dyna-style model-based algorithms highly depends on its model-free part. For example, in Halfcheetah, purely model-free TRPO achieves the asytomptic performance 4000 around 8M steps. So when using TRPO as the policy optimization oracle, it is expected to get such performance and variance. Our baseline paper SLBO[1] also reported a similar observation when adopting TRPO in its Appendix figure 4.\n\n[1] Luo Y, Xu H, Li Y, et al. Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees[C]//International Conference on Learning Representations. 2018.\n\n> **Q6:** \"Why choosing these specific training step caps in Figure 5(b)?\"\n\nThis is not a specific pick. The performance in DKitty-Stand starts to converge at 1700 steps, while in Panda-Reaching starts to converge at 1000 steps, then we freely chose 2000 and 1500 steps as the training steps cap. \n\n> **Q7:** \"Could you briefly discuss the costs induced by estimating the volume of the convex closure, especially when applying to high-dimensional state space?\"\n\nActual computation time: compared to the unconstrained cases, our total training time increased by an average of 3.24h in the HalfCheetah environment (300k steps, 17-dimensional state space) and 3.96h in the Ant environment (300k steps, 27-dimensional state space). And our computing infrastructure and CMLO computational time was listed in Appendix Table 5. Thus, the additional costs are acceptable. \n\nWhen applying to the high-dimensional case, we detailed how to calculate state-space coverage in Appendix Line 181-187. We first sample $N$ tuples from the replay buffer and then perform Principal Component Analysis to reduce the dimension, then we leverage the Graham-Scan algorithm to construct a convex hull of these $N$ points, which only takes $O(N \\log N)$ for time complexity.\n\n> **Q8:** \"The paper mentioned briefly in the conclusion that finding the optimal threshold for each specific environment may be time consuming. Could you please elaborate on that? 
For example, how different are the thresholds for different environments and the amount of steps it takes for tuning.\"\n\nWe admit that $\\alpha$ is a hyperparameter that needs manual tuning. Compared with MBPO, we only add the tuning cost of $\\alpha$, yet we do not need to tune the fixed model training frequency in MBPO.\n\nIn Table 3 of the appendix, we presented the thresholds used in 6 environments: $\\alpha=2.0$ for Swimmer, HalfCheetah, and Ant, $\\alpha=2.5$ for Humanoid, $\\alpha=3.0$ for Walker2d, and $\\alpha=1.2$ for Hopper. \n\nTo further demonstrate how this parameter is tuned, we present the tuning process for the Humanoid and Ant environments as follows. \n\nHumanoid | | | | \n----------|--------|-----------|-----------|---------\nalpha | 1.0 | 2.0 | 2.5 | 3.0 \nAverage Return | 5480.53 | 6402.78 | 6775.67 | 6348.92\n\n\nAnt | | | | \n------------|---------------|------------|------------|---------\nalpha | 1.0 | 2.0 | 2.5 | 3.0 \nAverage Return | 5600.23 | 6810.42 | 6382.25 | 6523.86 \n\n\nThanks again for your comments. We will continue to polish our statement for clarity in our revision. If you have any other questions, please post them and we are happy to have further discussions.",
" Thanks for your comments and suggestions. We provide clarification to your questions and concerns as below. If you have any additional questions or comments, please post them and we would be happy to have further discussions.\n\n> **Q1:** \"The authors propose a constraint estimation in their method to estimate the model shifts in Proposition 4.7. However, in Lines 245-247, the authors use the data from the current ensemble models and real environment to compute the constraint estimation, which is not consistent with Proposition 4.7.\" \n\nThere is probably some misunderstanding here. Our implementation is consistent with Proposition 4.7. We did give a detailed explanation on the consistency in Appendix D.5 (Line 243-255) and performed an experiment to visualize the connection between them in Figure 3 of the appendix. \n\nIt is consistent with Proposition 4.7 considering the general MBRL procedure because $M_2$ is trained based on the data from the current ensemble models and real environment (Line 235-238). More specifically, recall the constraint function \n$\\mathcal D_{TV}(P_{M_1}(\\cdot\\vert s,a)\\Vert P_{M_2}(\\cdot\\vert s,a)) \\leq \\sum_{s'\\in {\\cal S}}\\frac{1}{2}[\\vert P_{M_1}(s'\\vert s,a)- P(s'\\vert s,a)\\vert +\\vert P_{M_2}(s'\\vert s,a)-P(s'\\vert s,a)\\vert]$. As the updated dynamic $P_{M_2}$ usually comes closer to the true dynamics $P$ than the previous one $P_{M_1}$, we can use $\\sum_{s'\\in {\\cal S}}\\vert P_{M_1}(s'\\vert s,a)- P(s'\\vert s,a)\\vert$ to estimate it. \n\nMoreover, the results in Appendix Figure 3 demonstrate that our constraint estimation is higher than the true value and their trends stand consistent.\n\n\n> **Q2:** “The authors propose the event-triggered mechanism to determine when to update the model instead of the frequent model updating in MBPO[1]. However, the motivation to alleviate the model shifts by the event-triggered mechanism is unclear. The authors may want to explain why they introduce the event-triggered mechanism based on their proposed theoretical analysis. ”\n\nThere could be a misunderstanding. The event-triggered mechanism does not aim to alleviate model shifts but to detect model shifts and choose suitable occasions to train the model, as described in Line 249-253.\n\nAbout the motivation of the event-triggered mechanism:\n\n* The event-triggered mechanism is proposed to handle the constraint optimization problem of Proposition 4.7. Through it, we can decouple the constraint and objective of this intractable constraint optimization problem and solve them asynchronously, by detecting the model shifts constraint in the policy exploration stage and optimizing the objective function in the model training stage.\n* Besides, considering the general MBRL procedure, $M_2$ is updated based on newly collected data, so it is hard to satisfy (R2) condition when the distribution of these newly collected data differs drastically from that of training $M_1$. Thus it is natural to call off the policy exploration when the novelty of the newly collected data reaches the threshold. This is an intuitive explanation of our event-triggered condition. ",
" > **Q3:** “The advantages of the proposed event-triggered mechanism are unclear. The authors may want to explain why the event-triggered mechanism is more effective than previous methods [1, 2].”\n\nThanks for your suggestions. Let us first summarize the advantages of the event-triggered mechanism: \n\n* It is theoretically motivating which helps monotonicity, as detailed in A2 (response to Q2). Event-triggered mechanism focuses on the impact of model shifts. And drastic model shifts will hurt the analysis in previous work. \n\n* It is a simple, effective, and training-free method to solve the constrained problem in Proposition 4.7. \n * Compared to MBPO [1], it only introduces one hyperparameter, and it does not introduce additional training. \n\n * Compared to AutoMBPO [2], it does not need to construct a bilevel deep RL problem. So that it does not need to cost too much time in training.\n\nWe have reflected the above explanations to address the reviewer's concern in the rebuttal revision.\n\nBelow we provide some detailed analysis and comparison: \n\n* MBPO performs a fixed model training frequency. Our event-triggered mechanism instead provides a dynamically varying training frequency. The advantage of dynamically adjusting the model training interval has been demonstrated in Corollary 4.8. Besides, our event-triggered mechanism accounts for the impact of model shifts on performance, which is neglected in MBPO. We can infer from MBPO that $\\epsilon_m$ will be large when model shifts are too large, thus the monotonicity will be destroyed because it could hardly get an updated policy which increases the return under a certain model above $2R[\\frac{\\gamma^{k+1} \\epsilon_{\\pi}}{(1-\\gamma)^2} + \\frac{\\gamma^k \\epsilon_\\pi}{(1-\\gamma) }+ \\frac{k}{1-\\gamma}(\\epsilon_m')]$. \n* AutoMBPO is not meant to propose a practical MBRL algorithm but to do hyperparameter tuning. It did not touch on the model shifts issue. Although it can learn the model training frequency along with many other coupled hyperparameters, it is quite time-consuming and introduces more hyperparameters to tune in the outer RL. Our algorithm does not have such a heavy computational cost, for example, we spent an average 33.31h for Humanoid while AutoMBPO needs 245.33h. \n\n[1] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, 2019.\n\n[2] Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, and Zhenguo Li. On effective scheduling of model-based reinforcement learning. In Advances in Neural Information Processing Systems, 2021.",
" \n\n> **Q4:** “The authors claim that the proposed constraint estimation is “the practical overestimation for the model shifts” in Lines 247-248. The authors may want to provide the theoretical analysis of the overestimation.”\n\nOur proposed constraint estimation is a practical design, so that we call it \"practical overestimation\" in the paper. We provided an explanation for the overestimation in Appendix D.5, paragraph \"Estimation on model shifts\". Besides, we peformed an experiment to show the overestimation in figure 3 of the appendix, which demonstrates that our prediction (unobserved $M_2$) is higher than the estimation (ground truth $M_2$) , their trends stand consistent.\n\n\n\n> **Q5:** \"Several definitions are missing, such as $\\sigma_{M_1, M_2}$ in Theorem 4.6 and $\\Delta {\\cal D}$ in Line 245.\"\n\nThanks for your kindly reminder. We have refined the missing definitions in the rebuttal revision.\n\n* $\\sigma_{M_1, M_2}$denotes the model shift constraint threshold between model $M_1$ and $M_2$.\n\n* We denote replay buffer as ${\\cal D}$, and $\\Delta {\\cal D}$ is the newly encountered data that expands the replay buffer.\n\n> **Q6:** \"The formulation for computing the proposed volume in Lines 239-244 is missing.\"\n\nWe presented the details for computing the proposed volume in Appendix D.3 (Line 181-187). It is a practical design. We first sample $N$ Tuples from the replay buffer, then perform Principal Component Analysis for dimension reduction and then leverage the Graham-Scan algorithm to construct a convex hull of these $N$ points. \n\n> **Q7:** \"The authors may want to provide the detailed motivation of Equation 5.1.\"\n\nEq 5.1 is the event-triggered condition motivated by Proposition 4.7. Proposition 4.7 implies that, when the model shift constraint boundary is violated according to the estimation, we need to pause data collecting and turn to solve the optimization objective. Besides, although turning to train models once not to violate the constraint is theoretically reasonable, we avoid doing so in practice because performing an update on data with a minor shift in coverage and distribution is wasteful and may risk overfitting (Line 250-253). \n\nWe further explain each item of Eq. 5.1 here. We adopt the fraction form $\\frac{vol(\\mathcal S_{ \\mathcal D_t \\cup \\Delta \\mathcal D(\\tau )})}{vol(\\mathcal S_{\\mathcal D_t})} \\cdot \\mathcal L(\\Delta \\mathcal D(\\tau)) $ for the triggered condition. Denominator $vol(\\mathcal S_{{\\mathcal D_t}\\cup \\Delta {\\mathcal D}(\\tau)})\\cdot {\\cal L}(\\Delta {\\mathcal D}(\\tau))$ is used to obtain an estimation for the model shift, as detailed in Line 239-248. The numerator $vol (\\mathcal S_{D_t})$, on the one hand, is to reduce numerical errors; on the other hand, this fraction reflects the relative change of the policy coverage and model shift if we turn to train $M_2$ under different $\\tau$ starting from $M_1$. This fraction reflects the current ability to digest new data. It can facilitate the setting of threshold, for we do not need to tune $\\alpha$ once the policy coverage updates.\n\nSorry for the unsufficient explanation on it. And we have refined the description in the rebuttal revision according to your suggestion.\n\n\n\n",
" \n\n> **Q8:** The experiment settings in Section 6.2 are missing, such as the settings of all methods in Figure 2 and that of the visualization in Figure 3. The authors may want to provide the detailed settings of all ablation studies.\n\nThanks for the suggestions, we have included the detailed settings of our ablation studies in the Appendix in the rebuttal revision. Note that other hyperparameters we do not mention below are the same as the hyperparameter settings in Appendix Table 3. \n\n* Figure 2 (a): We compare to three unconstrained cases (given fixed model training interval), the fixed intervals are shown in the caption (w/o-n), and the number n means how many newly real interaction tuples have been collected. These experiments are averaged over 5 random seeds. (Sorry for the typo in Ant figure legend w/o-100, it should be w/o-150. We will fix it later on.)\n* Figure 2 (b): It is used to present the triggers times during training of the experiments in (a) (Same environment in the same column figures). We compute the average triggered times over 5 random seeds per 10k steps. For clarity, only the mean values are shown here. And the y-axis represents the number of the model training times performed every 10k steps. Note that Figure 2(a) and Figure 2(b) share the legends. \n* Figure 3 (a): The stage $i$ represents $[60\\times(i-1), 60\\times i]k$ steps. For visualization, we firstly sampled 6k tuples from the replay buffer in each stage, then we performed Uniform Manifold Approximation and Projection (UMAP) to get a visualization of policy coverage (state-space coverage). The data used here are from w/-ours experiments in Figure 2. \n* Figure 3 (b): This figure shows the prediction error, ${\\cal L}(\\Delta{\\cal D}) = \\mathop{\\mathbb{E}}\\limits_{(s,a,s')\\in \\Delta{\\cal D}} \\big[\\frac{1}{K}\\sum\\limits_{i=1}^K \\Vert s' - \\hat{f_{\\phi_i}}(s,a) \\Vert\\big]$ (details in Appendix 188-190). The data used here are from w/-ours experiments in Figure 2. $K$ is the size of $\\Delta{\\cal D}$, or to say the time steps interval between model updates.\n\nFor other ablation studies, we also provide detailed settings as follows: \n\n* Figure 4: The y-axis is the estimation on our practical triggered condition (the detailed explanation and settings in Appendix 198-202), $\\sum_{i=0}^{[\\tau/F]} \\log \\Big(\\frac{vol(\\mathcal S_{\\mathcal D_t \\cup \\Delta \\mathcal D(Fi)})}{vol(\\mathcal S_{{\\mathcal D}_t})} \\cdot \\mathcal L(\\Delta \\mathcal D(Fi)) + \\beta \\Big) \\geq \\alpha$. The data used here are from w/-ours experiments in Figure 2. Here, $\\beta=1.0$, $F=20$ for Hopper, and $F=50$ for the other five benchmarks, as listed in Appendix Table 3.\n\n* Figure 5 (a): For model network settings, we adopt the same as present in Appendix Table 3. About Legend: w/o-n, we use a data sampler with batchsize=20, thus we get 20*n real interactions during the model training interval. We compute the total triggered times and scale them to [0,1], which is shown in the bar plots.\n\n * For the TRPO part, the key parameters are listed below:\n\n Ant: horizon = 1000, $\\gamma$=0.99, gae=0.97, step_size=0.01, iterations=40\n\n HalfCheetah: horizon = 1000, $\\gamma$=0.99, gae=0.95, step_size=0.01, iterations=40\n\n* Figure 5 (b):\n\n * For the dynamical models network: Gaussian MLP with 3 hidden layers of size 200, batch size is 64, and the learning rate is 0.0001. For the iLQR part: LQR_ITER=10, R=0.001, Q=1, horizon=5. 
For legend: w/o-n, we get n real interactions during the model training interval. And $\\alpha=0.5$ in w/-ours. We compute the total triggered times, and scale them to [0,1], which is shown in the bar plots.\n * DKitty-Stand: This environment is from the DKittyStandFix in ROBEL, the environment parameters are the same as the forward setting in the original setting, we modified the task horizon as $T = 50$.\n * Panda-Reaching: state space dimension 20, action space dimension 7, task description: Under the simulation conditions of Coppeliasim, the endpoint of the panda arm is required to reach a random target point in space from a fixed initial position. The reward is set as the negative of the $L_2$-norm distance between the current position of the end point and the position of the target point. Range of target points is $[1.05, -0.25, 1.1] \\times [1.2, 0.25, 1.4]$. And task horizon $T = 50$. \n\nFinally, we hope we resolve all of your concerns. We have refined our explanations in the rebuttal revision according to your suggestions. And we will be continued to polish our language and the clarity in our revision. Thanks again for your suggestions.",
" This paper first demonstrates that the model shifts---the difference between the updated model and the model before updating---hinder the monotonic improvement of model-based RL. To tackle this problem, the paper proposes CMLO, which introduces an event-triggered mechanism to determine when to alleviate the model shifts. Experiments show the effectiveness of the proposed method. Strengths:\n\n1. The authors propose the theoretical analysis to show that the model shifts hinder the monotonic improvement of model-based RL, which provides a useful perspective in model-based RL.\n\n2. Experiments demonstrate that the proposed method improves the performance and generalization of existing methods.\n\n\nWeaknesses:\n1. The authors propose a constraint estimation in their method to estimate the model shifts in Proposition 4.7. However, in Lines 245-247, the authors use the data from the current ensemble models and real environment to compute the constraint estimation, which is not consistent with Proposition 4.7.\n\n2. The authors propose the event-triggered mechanism to determine when to update the model instead of the frequent model updating in MBPO [1]. However, the motivation to alleviate the model shifts by the event-triggered mechanism is unclear. The authors may want to explain why they introduce the event-triggered mechanism based on their proposed theoretical analysis.\n\n[1] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, 2019. My suggestions are as follows.\n\n1. The advantages of the proposed event-triggered mechanism are unclear. The authors may want to explain why the event-triggered mechanism is more effective than previous methods [1, 2].\n\n2. The authors claim that the proposed constraint estimation is “the practical overestimation for the model shifts” in Lines 247-248. The authors may want to provide the theoretical analysis of the overestimation.\n\n3. Several definitions are missing, such as $\\sigma_{M_1, M_2}$ in Theorem 4.6 and $\\Delta \\mathcal{D}$ in Line 245.\n\n4. The formulation for computing the proposed volume $vol(\\mathcal{S}_\\mathcal{D})$ in Lines 239-244 is missing.\n\n5. The authors may want to provide the detailed motivation of Equation 5.1.\n\n6. The experiment settings in Section 6.2 are missing, such as the settings of all methods in Figure 2 and that of the visualization in Figure 3. The authors may want to provide the detailed settings of all ablation studies.\n\n[1] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, 2019.\n\n[2] Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, and Zhenguo Li. On effective scheduling of model-based reinforcement learning. In Advances in Neural Information Processing Systems, 2021.\n Yes, the authors adequately addressed the limitations and potential negative societal impact. ",
" This paper studies how to ensure optimization monotonicity of learning an accurate dynamics model for MBRL. They derive a lower bound for the derived policy performance improvement that depends on the one-step dynamics prediction error of the current model, constraint by the model shift. \n\nInspired by the theory, they propose an algorithm to dynamically alternate between policy exploration and model learning, with the aim to improve optimization monotonicity. They evaluate the proposed algorithm on a series of MoJoCo control tasks and compared the results against a few model-free baselines, and show that their model-based method is able to reach the same SOTA asymptotic performance while being more stable and sample efficient. Strength:\n\nThe problem studied in the paper is important and relevant to practice. The paper is well-written and easy to follow. The proposed method is theoretically motivated.\n\nWeakness:\n\nMy main concerns are around the experiments and potential limitations. Please see the section below for detailed comments. 1. In Figure 1, the performances on different tasks are capped at different timesteps. In several cases the learning curves have not stabilized yet, e.g. Walker2d and Swimmer. Could you please report the final performance where the learning curves stabilize or at 5M steps (as used to determine the “asymptotic performance” of the SAC and MBPO baselines).\n2. What is the y-axis in Figure 3(b)? Could you please include the policy coverage and prediction error results for some of the baselines for a comparison?\n3. Figure 4 looks a bit confusing to me. Do the MBRL baselines start with an initial 30k steps exploration followed by model learning (inferred from Figure 4)? Why only showing 4k steps per stage instead of the whole 60k? What is the y-axis?\n4. Figure 4(b) shows that the model shifts estimation jumps drastically from stage 1 to stage 2, and that it doesn’t hit the threshold at all in stage 1 (first 4k steps). Could the authors please explain this unexpected observation?\n5. The absolute performance of TRPO on HalfCheetah and Ant tasks shown in Figure 5(a) are much lower than the other baseline results shown in Figure 1. The variance also seems quite high. Is it expected?\n6. Why choosing these specific training step caps in Figure 5(b)?\n7. Could you briefly discuss the costs induced by estimating the volume of the convex closure, especially when applying to high-dimensional state space? The paper mentioned briefly in the conclusion that finding the optimal threshold for each specific environment may be time consuming. Could you please elaborate on that? For example, how different are the thresholds for different environments and the amount of steps it takes for tuning. ",
" The paper focuses on the monotonic improvement for model-based reinforcement learning which is an extremely important problem due to the inherently entangled nature of the multi-level optimization problem - policy optimization and model learning. Earlier research has not specifically considered the model shift which is considered in this research while proving the monotonic improvement. The primary objective is to show that $||V^{M1|\\pi_1} - V^{M2|\\pi_2}|| \\geq C$ which can guarantee monotonic improvements under the updating dynamics. The primary reason for the model bias is a mismatch between the samples in the model learning stage and the policy optimization stage. To tackle the same, they formulate a constrained bi-level optimization framework for the MBRL problem and design an event-triggered strategy to decide when to update the model to guarantee monotonic improvement under changing dynamics. Empirical results show some improvements in sample efficiency from prior stable model-based RL methods and the ablation study supports the main hypothesis of the paper. \n The primary strength of the paper lies in the formulation of the bi-level constrained optimization objective from the lower bound objective with an event-triggered mechanism under a generative model assumption to update the model which is quite novel according to me. Earlier research has addressed the bias in the model learning and has also tried to guarantee monotonic improvement, but this paper explicitly talks about the monotonic improvement of the policy under shifted model dynamics and shows guarantees of monotonic improvements (with certain assumptions) which is interesting and novel. The biggest challenge in guaranteeing a monotonic improvement in the policy is due to the changing dynamics which might occur due to a potential mismatch between the true trajectories and model-generated trajectories and is indeed an important challenge to overcome. The paper breaks down the source of error into 2 components with the assumption that we can always get an epsilon optimal policy under a given model which is a bit optimistic but a very common and frequently used assumption in RL. The components being 1. inconsistency gap between the model and the environment : $E_{s,a \\sim d_{\\pi}} TV(P(\\cdot|s,a) || P_M(\\cdot|s,a))$ and 2. Optimal returns under the true model and they hypothesize that the performance difference $||V^{M1|\\pi_1} - V^{M2|\\pi_2}||$ is lower bounded by the above two aspects. Finally, with the above notions, the authors formulate a constrained lower bound optimization problem as stated in Theorem 4.6 which is quite novel as the formulation of model-based RL as a bi-level optimization problem is quite natural as done in this paper. The strength of the paper lies in designing an event-triggered mechanism and providing a high probability bound for the value of $k$ for guaranteeing monotonic improvement under the generative model assumption. In Corollary 4.8, the authors derive a relation between the model training interval $k$, model bias, and state-space coverage which is quite unique to my knowledge and given by $k \\propto \\frac{2 \\times vol(S)}{\\epsilon^2}$ (just approximated and ignore other terms). In other words, when the model bias is less one needs to increase $k$ to increase state action coverage which is quite intuitive. 
Overall the ablation study seems interesting and the experimental results show some improvements over the past SOTA model-based RL methods.\n\nThe primary weakness lies in various assumptions that have been made to derive some of the primary results that are not very general. To begin with, the main result in Corollary 4.8 is shown with a generative model assumption which is quite restrictive. The primary novelty of the research from a solution perspective lies in designing the event-triggered mechanism and the theory is shown with a generative model assumption also with linear\nquadratic regulator (which I still believe can be relaxed) and not for a general scenario. Secondly, it is not very clear how minimization of the constrained objective function in Proposition 4.7 turns out to be simple negative likelihood minimization. In Proposition 4.7, there is $P(s'|s,a)$ involved in the expression which can't be ignored since the $(s,a) ~ d^{\\pi_2}$ and optimization variable involve $\\pi_2$ as well. So, it's not clear how this is handled as we will never know $P(s'|s, a)$, and how this boils down to simple NLL minimization is not explicitly described. Also, I see in the derivations on lines 90, 97, 128, etc. the proofs involve summation over s,a which are relevant for Tabular or Discrete settings but the experiments are for continuous state-action space which causes a mismatch and some of the proofs might break when everything is expressed for continuous state-action spaces. The ablation study is interesting and the experimental results show improvement but not significant improvements. However, I feel the work addresses an important question and it might be helpful to understand this aspect for model-based RL. 1. Can you please illustrate how exactly is the volume of the of convex closure computed from the replay buffer? \n\n2. Can you please point me to the equation where you derive an estimation for the model shift as a product of volume $\\times$ avg prediction error?\n\n3. Instead of going by the complicated event-triggered mechanism, if I simply try to obey the constraint in the optimization problem with a TRPO type regularized update in the model parameter space that should also do right? Given that we know that under the gaussian dynamics assumption, the total variation distance simply can be upper-bounded by the difference b/w the means and variances as well and hence it should be easy to be in the feasibility region.\n\n4. Can you please show the derivation on how you obtain line 47 in Appendix where you mention that with Lipschitzness of $V^{\\pi}_{M_1} V^{\\pi}_{M_2}$ you derive the bounds for $|G^{\\pi}_{M_1, M_2} (s,a)| \\leq L |P_{M_2}(\\cdot|s,a) - P_{M_1}(\\cdot|s,a)|$. \n\n5. Additionally in line 48 in Appendix, you derive an upper bound for value function difference in-terms of the $|P_{M_2}(\\cdot|s,a) - P_{M_1}(\\cdot|s,a)|$. However, it has been shown in [1] that there is a dependence of the maximum reward $R$ (and horizon $H$ which will be replaced by some factor of $\\gamma$ here) intermingled/ associated with the Lipschitz constant. Basically, they prove that in the Lipschitz assumption of $||V1-V2|| \\leq L ||P1- P2||$, $L$ has a dependence on $CHR_{max}$. I think that should be applicable here as well, then how does the analysis gets affected? As now there is an additional dependence on the max reward that will be added. Can you please share your thoughts on the same?\n\nReferences : \n[1]. Ying Fan and Yifei Ming. 
Model-based reinforcement learning for continuous control with posterior sampling. In Marina Melia and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 3078–3087. PMLR, 18–24 Jul 2021. The author mentions some points regarding the current applicability to certain environments and wants to scale with improved optimization methods which are sensible.",
" This paper studies the relationship between the shift brought by model updates and policy performance. The authors proposed a model shift constraint and the CMLO algorithm for a monotonic improvement guarantee. pros: 1. The authors consider an important problem in RL, the inconsistency of policy updates under shifted models. The main results and proofs are largely correct in my view.\n\n2. The ablations verify the proposed algorithm.\n\ncons: 1. The monotonic results are not novel in MBRL, for example, the authors cite DPI [1], which established similar monotonic results. But the authors claim that previous works \"characterize the monotonicity in terms of a fixed model of interest\". I don't see the reason for such a claim and the exact advantage of CMLO.\n\n2. From the introduction and related work, the partial reasons for cons 1 might include that the authors derive the theorems from a global optimal view. But I can't judge the necessity of doing this. E.g., in theorem 4.5, why do we care about the *optimal* value under a model? Notably, the monotonic property rather than global optimality is of interest. Therefore, according to the performance difference lemma, shouldn't we focus more on the value-under-model evaluated using the *current policy*?\n\n3. Another confusion I have is the notation of state coverage. As it is introduced from a global point of view, I can't judge its necessity before understanding the necessity of the global view. Besides, it is better to include the exact definition of state coverage in the context of RL, so that we can see why it is estimated with the replay buffer. \n\n4. The intuition behind the event-triggered equation 5.1 is also not very clear. What will the fraction of state coverage give?\n\nMinor: how much additional time does it cost for estimating model shifts?\n\n[1] Dual Policy Iteration, Wen Sun et al. 1. It would be better to have more comparisons between CMLO and previous works (e.g. DPI). Although they have different proof structures (from the local and global view?), the monotonic improvement results are similar in my view. \n\n2. Why do we care about the *optimal* value under a model and what is the advantage compared to the local view? I am assuming that it is the primary reason that leads to different algorithms between CMLO and previous works.\n\n3. The definition of state coverage and the event-triggered equation is not so clear.\n\nMy concerns are closely related. So it might be the case that I missed something important. I would like to change my score depending on the authors' rebuttal. No."
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"yJSAXbpbIFZ",
"v22RSWkON3p",
"9p2kgfusP0R",
"0JpPFh_kT6U",
"-kX-Fc2HiaQ",
"vUxm3lzapf",
"ByB5gJlmnZ",
"XLRFOT2Rk_U",
"2oNSoPueuVvE",
"2oNSoPueuVvE",
"2oNSoPueuVvE",
"5bxM80s2Ip5",
"CBCavhzBf0",
"qrfp7qoLFZK",
"k54_3pS64ui",
"4Tzy8Q7gZxCs",
"cUL37YAAdzw",
"0JpPFh_kT6U",
"yJSAXbpbIFZ",
"ACK1veiIHN4",
"ACK1veiIHN4",
"gpEG3gTGBoC",
"X2m43Ei1ucL",
"ACK1veiIHN4",
"ACK1veiIHN4",
"ACK1veiIHN4",
"ACK1veiIHN4",
"ACK1veiIHN4",
"Uyxxl36GoST",
"Uyxxl36GoST",
"Uyxxl36GoST",
"Uyxxl36GoST",
"gpEG3gTGBoC",
"gpEG3gTGBoC",
"X2m43Ei1ucL",
"X2m43Ei1ucL",
"X2m43Ei1ucL",
"X2m43Ei1ucL",
"nips_2022_9a1oV7UunyP",
"nips_2022_9a1oV7UunyP",
"nips_2022_9a1oV7UunyP",
"nips_2022_9a1oV7UunyP"
] |
nips_2022_J0nhRuMkdGf | Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees | Variational inequalities in general and saddle point problems in particular are increasingly relevant in machine learning applications, including adversarial learning, GANs, transport and robust optimization. With increasing data and problem sizes necessary to train high performing models across various applications, we need to rely on parallel and distributed computing. However, in distributed training, communication among the compute nodes is a key bottleneck during training, and this problem is exacerbated for high dimensional and over-parameterized models. Due to these considerations, it is important to equip existing methods with strategies that would allow to reduce the volume of transmitted information during training while obtaining a model of comparable quality. In this paper, we present the first theoretically grounded distributed methods for solving variational inequalities and saddle point problems using compressed communication: MASHA1 and MASHA2. Our theory and methods allow for the use of both unbiased (such as Rand$k$; MASHA1) and contractive (such as Top$k$; MASHA2) compressors. New algorithms support bidirectional compressions, and also can be modified for stochastic setting with batches and for federated learning with partial participation of clients. We empirically validated our conclusions using two experimental setups: a standard bilinear min-max problem, and large-scale distributed adversarial training of transformers. | Accept | Dear Authors,
We had a long discussion about this paper. Overall, the reviews are positive. Several reviewers raised their scores after the rebuttal phase, and they found the response by the authors satisfactory.
However, there were some concerns about the novelty of the paper that I summarize here:
This paper combines some standard techniques and ideas from decentralized optimization and minimax optimization to obtain the presented results. Hence, the algorithmic novelty of the paper is limited. Perhaps the major contribution of the paper is in the vector that they decide to quantize, but still, the main idea of the paper is very similar to the single-loop variance reduction techniques that were first proposed in stochastic optimization and later used for distributed optimization. The main theoretical challenge that the authors had to face was combining quantization with the Extra Gradient method as highlighted in the first paragraphs of section 4.1. Indeed, similar quantization ideas have been extensively studied in the distributed optimization literature and thus the algorithmic novelty seems to be very limited. Similarly, excluding the aforementioned challenge (Extra Gradient + compression) the derivation of the theoretical results appears to be tedious but based on standard techniques.
Considering the above points, the AC and one of the reviewers found the paper below the bar as its novelty is limited. However, four reviewers voted in favor of accepting this paper, as they believe the technical novelty of the paper and its proof techniques are significant enough.
I respect the majority vote and hence recommend this paper to be accepted. | val | [
"z067atOXxop",
"R8M4qH8bfk0",
"1d-4xGTMqBU",
"sDw1E5xEF02",
"EThTETUDzds",
"tOrktJC477n",
"9uKJabbbqdQI",
"CtFXk5J9F8",
"we7XXlFhwA",
"RG0VVnlZwI6",
"1CKiisvjocm",
"lpm6CTA9bXs",
"feurdYz8Ri8",
"LcajO32Xy0H",
"CiovS66XRP4",
"o6KR5q9_xfz",
"Zq3zWYlcn5_",
"6CfERbe3clX",
"QSGYF4U2ppB",
"IIQ5LeAi990"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are grateful for raising the score! Thanks again for the review, response, important comments, and positive final feedback!",
" Thanks again for your thorough response. I think all my questions are resolved, and I decide to raise my score to 7.",
" We thank Review **1QcY** for the response. See answers below.\n\n> **Following the authors' response, in Line 220-221 of the revised paper $w^{k+1} = z^{k+1}$ should be $w^{k+1} = z^k$. Indeed, this is what led me to think that there is a typo in Algorithm 1 in the first place.**\n\nThanks very much! We fixed! See the new revision.\n\n> **I notice that the authors consider VIs in the unconstrained setting as shown in (3). On the other hand, in the example of adversarial training of Transformers we are dealing with a constrained min-max problem. Could the authors please comment on the possibility of extending the results to the constrained setting?**\n\n1) For unbiased compressors, we say yes. Moreover, we checked all the proofs, and we can guarantee that the constrained setting can be done as well. For contractive compressors, the question is more tricky. We cannot guarantee it. Moreover, we have analyzed the literature on contractive compressors and on error compression techniques for minimization problems, and even for them the authors usually consider unconstrained setting [1,2,3,4,5,6,7]. Therefore, this is an interesting area for research not only for VIs/SPPs, but also for minimizations.\n\n2) The problem from Section 5.2 (Transformer training) can be explored separately. This is due to the fact that the variables $\\rho_n$ are unique for each data sample, they are stored locally and are not transmitted to the server (unlike the weights of the model $w$). It turns out that compression and error compensation are needed only for the minimization variable $w$. The variable $w$ is unconstrained, while the variables $\\rho_n$ are constrained. It seems to us that for this particular problem, we can analyze the constrained setting.\n\n\n[1] Sebastian U Stich and Sai Praneeth Karimireddy. The error-feedback framework: Better rates for sgd with delayed gradients and compressed communication.\n\n[2] Xun Qian, Peter Richtárik, and Tong Zhang. Error compensated distributed sgd can be accelerated\n\n[3] Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, and Mher Safaryan. On biased compression for distributed learning.\n\n[4] Peter Richtárik, Igor Sokolov, and Ilyas Fatkhullin. EF21: A new, simpler, theoretically better, and practically faster error feedback.\n\n[5] Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, and Peter Richtárik. Ef21 with bells & whistles: Practical algorithmic extensions of modern error feedback.\n\n[6] Hanlin Tang, Chen Yu, Xiangru Lian, Tong Zhang, and Ji Liu. Doublesqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression.\n\n[7] Shuai Zheng, Ziyue Huang, and James Kwok. Communication-efficient distributed blockwise momentum sgd with error-feedback.",
" I thank the authors for the detailed responses. Now I see that MASHA and the variance-reduced FBF method in Alacaoglu and Malitsky's paper do differ in some ways and follow different lines of proof, though they share some similar high-level ideas. Below are a few further remarks:\n\n- Following the authors' response, in Line 220-221 of the revised paper $w^{k+1}=z^{k+1}$ should be $w^{k+1}=z^k$. Indeed, this is what led me to think that there is a typo in Algorithm 1 in the first place.\n- I notice that the authors consider VIs in the unconstrained setting as shown in (3). On the other hand, in the example of adversarial training of Transformers we are dealing with a constrained min-max problem. Could the authors please comment on the possibility of extending the results to the constrained setting?",
" We greatly thank Reviewer **CwZ6** for the response, important comments, and positive final feedback!\n\nOf course we will include everything discussed with Reviewer in the final version, if we have extra space. ",
" Thank you for responding to my concerns. I am satisfied with the responses and have increased my score to 6. My only suggestions is that, if accepted, the paper should contain the clarifications you provided above regarding the adaptation of MASHA for adversarial training of Transformers since these points will make the scope and limitations of that experiment much clearer.",
" With this message, we would just like to kindly remind Reviewers that we would be happy if Reviewers would participate in the rebuttal discussion process. We are looking forward to hearing from Reviewers **9vze**, **CwZ6**, **1QcY** and **SxRG**. We thank Reviewer **X46K** for the responses to the rebuttal.",
" We are very grateful for the raised score! We again want to say thanks for the very attentive reviewing of our paper, especially the work with the text!",
" I appreciate the efforts of the authors to address my concerns. I find that after revision the paper is written much more carefully and succinctly and the main results and ideas are conveyed with clarity (changes have been made throughout the whole paper and not only in the parts written in blue). Some important points are the description of Algorithm 1 as well as its comparison with uncompressed techniques which are now much better articulated. Further, the experiment section is now more carefully presented. In my opinion, the only weakness of the current version of the paper is that the novelty of their algorithms is somewhat limited. After carefully considering the reviewers' comments, the authors' responses and the latest version of this paper I have decided to improve my overall score to 6 \"Weak Accept\"(Soundness: 3 good, Presentation: 3 good, Contribution: 3 good).",
" We thank Reviewer **SxRG** for the work, and for the appreciation of our paper. \n\nWe corrected all typos in the revision. If Reviewer has any further questions or comments on improving the paper, we will be happy to answer them.\n",
" We thank Reviewer **1QcY** for the work! We are pleased that Reviewer appreciated our paper. We further provide answers to the questions and comments that Reviewer noted.\n\n> **Connections of MASHA and [1]** (here [1] is Alacaoglu and Malitsky's paper)\n\nAs Reviewer correctly pointed out ( we point out it in line 207 (208 in the revision) and line 949 (995 in the revision) too), the idea of our method is related to the work [1]. But we borrow the momentum technique from there, everything else is either new facts or the use of already classical ones. \n\n1) We have our own version of the proof. Please, compare for example Lemma 2.2 of [1] and our proof. \n\n2) An interesting point: Reviewer noted that we have a typo in Algorithm 1: $w_{k+1}=z_{k}$. This is not a typo. Indeed in [1] the authors use $w_{k+1}=z_{k+1}$, but it is more convenient for us in our algorithm/proof to use $w_{k+1}=z_{k}$. This is most likely due to the fact that our and Alacaoglu and Malitsky's proofs follow several different ways. \n\n3) Reviewer also pointed out Lemma 2.4 from [1]. It is quite classical, including in [1] the authors give a reference where it first appeared [2] (more than 10 years ago).\n\n4) We give an analysis of Algorithm 1 and Algorithm 2 in the non-monotone case, which is not done in [1].\n\n5) The analysis of Algorithm 2 is more complicated than the analysis of Algorithm 1 and the analysis of algorithms from [1]. This is due to the use of additional sequences for error feedback.\n\n> **Minor issues**\n\nThanks! We have fixed all in the revision, but $w_{k+1}=z_k$ is not a typo (see above).\n\n[1] Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods.\n\n[2] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming.\n",
" We thank Reviewer **CwZ6** for the work! We are pleased that Reviewer appreciated our paper. We further provide answers to the questions and comments that Reviewer noted.\n\n> **The experiment on Adversarial Training of Transformers is not clear at all.** Specifically it is unclear if any of the MASHA approaches have been implemented for this since the plots appear to contain only the baselines. Moreover the size of the plots, and the font size of the text in them is extremely small and thus it is very hard to tell what is going on in them.\n\n1) We have changed the size of figures and captions - see the revision.\n\n2) The main goal of the second experiment is to show that for the large distributed VIs/SPPs, compression can play an important role. \n\n3) As an optimizer, we use LAMB + MASHA. From MASHA in this optimizer there is a negative momentum (in MASHA $z^{k+1/2} = z^k - (1 - \\tau) ( z^k - w^k) - \\gamma F(w^k)$, here $- (1 - \\tau) ( z^k - w^k)$ is a negative momentum), as well as a compression and an error compensation technique. From LAMB (Adam type method) we add adaptivity/scaling to MASHA. \n\n4) Reviewer is right that we don't exactly use MASHA. But it is a popular tendency in recent years in papers about VIs and SPPs, to make a theory for one method (without scaling/adaptivity), and then somehow modify Adam based on this theory. See papers [15, 26, 58, 13, 49] in our literature list. \n\n5) We noted for ourselves that more recently (e.g., after the deadline for this NeurIPS 2022 conference) there have been papers that attempt to analyze methods with scaling/adaptivity, like Adam, for SPPs and VIs. Then this would be an interesting direction for future research to connect MASHA and LAMB in theory. But it would be hard to get some additional facts into this paper. \n\n6) Here we can only say that MASHA is good for monotone problems and really gives a win. And LAMB + MASHA are good for huge transformers.\n\n> **It is not clear if MASHA1 will always reduce the communication cost.** Corollary 1 appears to show that MASHA1 will have fewer iterations than the uncompressed case, but each round of iteration can involve upto 2 rounds (each way) of communication between the server and the clients. As the authors state in line 235, the average communication cost per round is $(1/\\beta+1−\\tau)$ times the communication cost in the uncompressed case. It is not clearly stated if $(1/\\beta+1−\\tau)$ is smaller or larger than 1. If it is larger, then the communication cost would be higher than that of the uncompressed case from what I can tell.\n\n1) In line 235 (in the first version of the paper, in the revision it is line 238) we state that the average communication cost $(1/\\beta+1−\\tau)$ per iteration. \n\n2) One can note that the Extra Gradient method (equality (6) after line 193 in the revision) has the communication cost $(1+1)$ per iteration, because we need to transfer the uncompressed operator $F$ twice per iteration. If we compare $(1/\\beta+1−\\tau)$ for MASHA1 and $(1+1)$ for Extra Gradient, we see that MASHA1 wins for any $\\beta$ and $\\tau$, since $\\beta \\geq 1$, $0\\leq \\tau \\leq 1$ by definition. \n\n3) If we want to compare compressed MASHA1 with uncompressed MASHA1 ($\\beta = 1$), then one can note that uncompressed MASHA1 has the average communication cost $(1+1−\\tau)$ per iteration. We can see that $(1/\\beta+1−\\tau) \\leq (1+1−\\tau)$ for any $\\beta$, because $\\beta \\geq 1$ by definition. \n\n4) Reviewer asked to compare $(1/\\beta+1−\\tau)$ and $1$. 
In Corollary 4.1 we choose $1−\\tau = 1/\\beta$, then we need to compare $2/\\beta$ and $1$. $2/\\beta$ can sometimes be greater than $1$, but for practical compression operators usually $\\beta >> 1$, and $2/\\beta < 1$.\n\n> **Please include a conclusion section in the paper.**\n\nWe have included, see the revision of our paper. For convenience, we give it here \n\nIn this paper we present algorithms with unbiased and contractive compressions for solving distributed VIs and SPPs. Our algorithms are presented in deterministic, stochastic and federated versions. All basic algorithms and their modifications support bidirectional compression. Experiments confirm the efficiency of both our algorithms and the use of compression for solving large-scale VIs in general.\nIn future works it is important to address the issue of the necessity to forward uncompressed information in some iterations. Although full packages are rarely transmitted, this is a slight limitation of our approach. Lower bounds for compression methods are also an interesting area of research. At the moment there are neither such results for VIs and SPPs, nor for minimizations. In Appendix C we only hypothesize the optimality of our methods and back it up with analogies, provable lower estimates could complete the story with compressed methods.\n\n> **The notation for local operators $F_m$ is used in Section 2.3 before it has been defined.** Please define notation before using it.\n\nThanks, we have fixed it in the revision!\n",
" We thank Reviewer **X46K** for the work! We are glad that Reviewer rated the contribution of our paper as “good”. \nWe further provide answers to the questions and weaknesses that Reviewer noted. We hope we were able to solve the main problems.\n\n> **Firstly, the writing of the paper requires a major revision.** There are many typos, syntactic errors and sentences that do not make sense throughout the main body and the appendix. As a result many important points of the paper are not clearly conveyed such as the description of Algorithm 1. Further, the appendix needs also to be rewritten more carefully to make it easier to follow. Below I state a few examples that could be revised\n\nWe are especially grateful to Reviewer for the work with the text of our paper. The comments and issues on the text have helped to improve our work. Please could Reviewer take a look at the revision of our article and see if Reviewer likes the changes? We have tried to solve all the troubles.\n\n> **While the theoretical results are extensive the originality and technical contribution appears to be limited.** Similar results have been obtained with the same compression operators in optimization problems and certain techniques are repetitive.\n\n1) In Sections 4.1 and 4.2, we try to discuss that the new methods are not easy to obtain. In particular, known methods for minimization problems do not give the necessary result. Simple modifications of the Extra Gradient method do not give too. Our method is based on the negative momentum idea from [1] about non-distributed problems, but we give a different analysis, adding the non-monotone case (which is not in [1]). Moreover, Algorithm 2 is much more difficult to analyze than Algorithm 1 and algorithms from [1] because of the presence of the error compensation sequences.\n\n2) Reviewer **1QcY** also noted this (one can see Reviewer **1QcY**‘s comment and our response to it in the corresponding review). But Reviewer **1QcY** gave 6.\n\n[1] Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods.\n",
" We thank Reviewer **9vze** for the work! We are very sorry that Reviewer couldn't spend much time on our paper, but Reviewer helped us make our paper better. \n\nOther Reviewers rate our contribution as “good” or “excellent”. We ask Reviewer not to judge our paper harshly because of typos and captions size. We have corrected all these problems in the revision. In more detail below.\n\n> **style violations.** The authors greatly reduce the font-size of their figure captions, which as far as I am aware breaks the style rules, allowing them to fit more content into the 9 pages. This may sound a bit harsh, but I think that this is unfair to other authors who follow the rules and I would advocate for rejecting the paper for this reason.\n\nOur paper revision is now 9 pages long, but we have returned the caption size to normal.\n\n> **the paper has not been properly proof-read.** Basic aspects of the technical introduction contain errors. For instance, the saddle point problem is introduced as min-min on line 158 when as far as I am aware it should be min-max. Also on line 221, the symbol q^serv is used twice instead of the proper notation.\n\nWe have uploaded the revision of the paper in which the typos have been corrected. We have also tried to make an independent proof-read.\n\n> **exposition.** The exposition could be much clearer. For instance at line 154, no supporting intuition is offered about the mathematical form of the VI inequality, making it hard for an unfamiliar reader to build a working understanding of this topic.\n\nIn the revision, we gave additional information about this. In line 157 there is a link to Appendix B. In this section we give examples of VIs formalism breadth. For the convenience, we include it here\n\n**Example 1 [Minimization]** Consider the minimization problem:\n\\begin{align}\n\\min_{z \\in R^d} f(z).\n\\end{align}\nSuppose that $F(z) = \\nabla f(z)$. Then, if $f$ is convex, it can be proved that $z^* \\in R^d$ is a solution for the VI problem if and only if $z^* \\in R^d$ is a solution for the minimization problem. And if the function $f$ is non-convex, then $z^* \\in R^d$ is a solution for the VI problem if and only if $\\nabla f(z^*) = 0$, i.e. $z^*$ is a stationary point.\n\n\n**Example 2 [Saddle point problem]** Consider the saddle point problem:\n\\begin{align}\n\\min_{x \\in R^{d_x}} \\max_{y \\in R^{d_y}} g(x,y).\n\\end{align}\nSuppose that $F(z) = F(x,y) = [\\nabla_x g(x,y), -\\nabla_y g(x,y)]$ and $Z = R^{d_x} \\times R^{d_y}$. Then, if $g$ is convex-concave, it can be proved that $z^* \\in Z$ is a solution for the VI problem if and only if $z^* \\in Z$ is a solution for the saddle point problem. And if the function $g$ is non-convex-non-concave, then $z^* \\in Z$ is a solution for the VI problem if and only if $\\nabla_x g(x^*, y^*) = 0$ and $\\nabla_y g(x^*, y^*) = 0$, i.e. $z^*$ is a stationary point.\n\nIf minimization problems are widely researched separately from variational inequalities. The study of saddle point problems often is associated with variational inequalities, therefore saddle point problems are strongly related to variational inequalities. \n\n**Example 3 [Fixed point problem]**\nConsider the fixed point problem:\n\\begin{align}\n \\text{Find} ~~ z^* &\\in R^d ~~ \\text{such that} ~~\n T(z^*) = z^*,\n\\end{align}\nwhere $T: R^d \\to R^d $ is an operator. With $F(z) = z - T(z)$, it can be proved that $z^* \\in R^d$ is a solution for the VI problem if and only if $F(z^*) = 0$, i.e. 
$z^* \\in R^d$ is a solution for the fixed point problem.\n\nSince this section takes up half of the page, we are ready to include it in the main part if we have extra space.",
" Dear Reviewers, Area Chairs and Senior Area Chairs!\n\nThank you very much for your work! You really helped make our paper better.\n\nWe have published a revision of our paper in which we have tried to solve most of the issues related to work. All changes are highlighted in blue. What is new:\n\n1) Conclusion. Reviewer **CwZ6** asked to include a conclusion, where we, among other things, have described directions for future works.\n\n2) Typos, rephrases etc. Reviewers **9vze**, **X46K**, **CwZ6**, **1QcY**, **SxRG** found typos or/and gave comments to improve the text. We have taken all comments into account in the revision.\n\n3) Size of figures and captions. We have enlarged the figures and their captions at the request of Reviewers **9vze** and **CwZ6**.\n\n4) Examples of VIs Formalism Breadth (Appendix B). Reviewer **9vze** asked for an additional explanation of the VI formalism for people who are not very familiar with this topic. We made a separate small section - Appendix B (a link to this section is given in line 157 of the revision). Since this section takes up half of the page, we are ready to include it in the main part if we have extra space.\n",
" The authors propose communication-compressed methods for solving a particular class of problems that they refer to as \"variational inequalities\". The authors prevent theoretical guarantees for their method, claim that a naive compressed-gradient based approach will fail in general, and present experiments demonstrating benefit to their technique over a naive compressed-gradient based. ### Major caveat to review\n- I have not had time to go deeply through the mathematical content of this paper, therefore my review will not be able to address this aspect\n- I apologise that my review thus addresses superficial aspects of the paper.\n\n### Strengths\n- the authors address a potentially practically useful problem, and produce both theoretical and experimental results on this topic.\n\n### Weaknesses\n- style violations. The authors greatly reduce the font-size of their figure captions, which as far as I am aware breaks the style rules, allowing them to fit more content into the 9 pages. This may sound a bit harsh, but I think that this is unfair to other authors who follow the rules and I would advocate for rejecting the paper for this reason.\n- the paper has not been properly proof-read. Basic aspects of the technical introduction contain errors. For instance, the saddle point problem is introduced as min-min on line 158 when as far as I am aware it should be min-max. Also on line 221, the symbol q^serv is used twice instead of the proper notation.\n- exposition. The exposition could be much clearer. For instance at line 154, no supporting intuition is offered about the mathematical form of the VI inequality, making it hard for an unfamiliar reader to build a working understanding of this topic.\n\n### Overall\nI want to apologise to the authors that my review is superficial. However, I am being clear about the fact that it is superficial. I want to add that I had a limited amount of time to review the work and ultimately needing to comment on things like style violations reduced the amount of time available to deal with the content. I do not have any questions for the authors Negative societal impact is not an issue here.",
" In this work the authors develop distributed communication-efficient methods for solving i)variational inequalities and ii)saddle point problems. The proposed algorithms (MASHA1 and MASHA2) utilize bidirectionally existing compression operators ( i) unbiased compression operator and ii) contractive compression operator) in order to reduce their communication cost in strongly monotone, monotone and non-monotone regimes. Convergence results are derived and presented for MASHA1 & 2 in all three regimes and numerical experiments showcase the merits suggested by the theory as well as the superior performance of the proposed methods compare to Extra Gradient. Stochastic variants (VR-MASHA1 and VR-MASHA2) with variance reduction are studied as well as connections to Federated Learning. Strengths: This paper studies an interesting problem. Reducing the communication cost for distributed variational inequalities and saddle point problems is relevant in the area of machine learning. The paper provides convergence results for the proposed methods in many different regimes (strongly monotone, monotone and non-monotone operator regimes, deterministic, stochastic etc.). The theoretical results are also supported by illustrative experiments. \n\nWeaknesses: In my perspective the paper has two main weaknesses.\n1. Firstly, the writing of the paper requires a major revision. There are many typos, syntactic errors and sentences that do not make sense throughout the main body and the appendix. As a result many important points of the paper are not clearly conveyed such as the description of Algorithm 1. Further, the appendix needs also to be rewritten more carefully to make it easier to follow. Below I state a few examples that could be revised : \n\nThe sentence in lines 39-42 needs to be rephrased so that it has a clear meaning.\nline 53 : In 'implementations it often advantageous' there is an 'is' missing.\nline 56: Remove 'the' from 'lead to the comparable test error'.\nline 60: Remove 'Since' from the beginning of the sentence.\nline 79: Replace 'due to' with 'since'.\nline 87: Remove ''provably\nlines 130-131: Rephrase 'Moreover, devices can not only have a bad connection, in which one needs to compress data heavily, they can simply disconnect from the learning process'.\nlines 140-143: Need rephrasing.\nlines 209-221: The paragraph explaining MASHA1 needs to be rewritten. There are many typos and syntactic mistakes which make it hard to decipher the description of the algorithm.\nline 228 : Replace 'listing' with 'description'.\nline 230: 'an' is missing in 'Let us find optimal way to choose it'\nline 247 : Rephrase 'Then once can note that MASHA1 O(1/M + 1/β · L/μ) better than the uncompressed extragradient method. We think that the factor O(1/M + 1/β · L/μ) is unimprovable and optimal – see Section A.'\n\n2. While the theoretical results are extensive the originality and technical contribution appears to be limited. Similar results have been obtained with the same compression operators in optimization problems and certain techniques are repetitive. The paper could certainly be improved substantially by more carefully revising the writing and the points mentioned in the 'Weaknesses section'. Specifically the technical contributions and the description of algorithm 1 need to be more carefully rewritten. 
\n\nAlso as mentioned above the following parts need revising:\nThe sentence in lines 39-42 needs to be rephrased so that it has a clear meaning.\nline 53 : In 'implementations it often advantageous' there is an 'is' missing.\nline 56: Remove 'the' from 'lead to the comparable test error'.\nline 60: Remove 'Since' from the beginning of the sentence.\nline 79: Replace 'due to' with 'since'.\nline 87: Remove ''provably\nlines 130-131: Rephrase 'Moreover, devices can not only have a bad connection, in which one needs to compress data heavily, they can simply disconnect from the learning process'.\nlines 140-143: Need rephrasing.\nlines 209-221: The paragraph explaining MASHA1 needs to be rewritten. There are many typos and syntactic mistakes which make it hard to decipher the description of the algorithm.\nline 228 : Replace 'listing' with 'description'.\nline 230: 'an' is missing in 'Let us find optimal way to choose it'\nline 247 : Rephrase 'Then once can note that MASHA1 O(1/M + 1/β · L/μ) better than the uncompressed extragradient method. We think that the factor O(1/M + 1/β · L/μ) is unimprovable and optimal – see Section A.' The authors addressed the limitations of the paper.",
" The paper proposes approaches to add compression to the communication steps involved in solving variational inequalities in a distributed fashion. The authors propose approaches for both unbiased and contractive compression along with theoretical analysis and two experiments to validate their approaches. Strengths:\n\n1. This appears to be the first work to address the issue of compressed communication in distributed methods for solving variaitional inequalities. Moreover it proposes and analyzes separate schemes for both unbiased and contractive compressors.\n\n2. Results on the synthetic experiment of solving a bilinear saddle point problem clearly show that the proposed approaches require significantly fewer iterations than baselines for convergence.\n\nWeaknesses:\n\n1. The experiment on Adversarial Training of Transformers is not clear at all. Specifically it is unclear if any of the MASHA approaches have been implemented for this since the plots appear to contain only the baselines. Moreover the size of the plots, and the font size of the text in them is extremely small and thus it is very hard to tell what is going on in them.\n\n2. It is not clear if MASHA1 will always reduce the communication cost. Corollary 1 appears to show that MASHA1 will have fewer iterations than the uncompressed case, but each round of iteration can involve upto 2 rounds (each way) of communication between the server and the clients. As the authors state in line 235, the average communication cost per round is ($1/\\beta + 1 - \\tau$) times the communication cost in the uncompressed case. It is not clearly stated if $1/\\beta + 1 - \\tau$ is smaller or larger than 1. If it is larger, then the communication cost would be higher than that of the uncompressed case from what I can tell.\n\n2. The paper appears to be incomplete with no conclusion sections and no discussion of limitations/future work. 1. Please address points 1 and 2 under Weaknesses above.\n\n2. Please include a conclusion section in the paper.\n\n3. The notation for local operators $F_m$ is used in Section 2.3 before it has been defined. Please define notation before using it. N/A",
" In this paper, the authors consider solving distributed variational inequalities (VIs) with compressed communication. For unbiased compressors, they use the variance reduction (VR) technique to control the variance; while for contractive compressors, they further incorporate the error compensation mechanism to handle the bias. They prove convergence rates for the cases where the operator is strongly-monotone, monotone, or non-monotone with Minty's condition, and demonstrate the speedup compared with the uncompressed counterparts. They further show how to extend their methods to the stochastic setting and the federated learning setting with partial participation. ## Strengths\n\n- This is the first work to study communication compression techniques in the setting of distributed VIs. Given the increasing importance of saddle point problems and the necessity of designing communication-efficient algorithms, I think this work should be of interest to the community.\n- The presentation is mostly clear and easy to follow.\n- The results are comprehensive and seem technically sound.\n\n## Weaknesses\n\n- I don't see any serious problem, but as a mostly theoretical paper I feel its algorithmic novelty is somehow limited. For instance, the idea of combining VR techniques and gradient compression seems not new; it has appeared in [1]. Also, the algorithm MASHA1 as well as its analysis resemble those in [2], with the stochastic oracle replaced by the compressed operator.\n\n[1] Xun Qian, Peter Richtárik, and Tong Zhang. Error compensated distributed sgd can be accelerated. arXiv preprint arXiv:2010.00091, 2020.\n\n[2] Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods. arXiv preprint arXiv:2102.08352, 2021. - Could you please articulate the connections between the proposed methods and the existing ones? In particular, both MASHA1 and MASHA2 seem similar to the FBF method with variance reduction in [2]. Also, the proof starting at line 786 in the Appendix also seems to be related to Lemma 2.4 in the mentioned paper. \n- Minor issues: \n - The sentence from 39-42 is grammatically incorrect. \n - Line 53: \"it often\" -> \"it is often\"\n - Line 106: \"second norm\" -> \"Euclidean norm\"?\n - The fourth line from last in Algorithm 1: \"$w^{k+1}=z^k$\" -> \"$w^{k+1}=z^{k+1}$\"\n - Line 226: It should be $\\mathcal{O}(1/\\epsilon)$ instead of $\\mathcal{O}(1/\\epsilon^2)$?\n - The reference number in the appendix seems to be off by one. Yes, the authors have mentioned the limitations in the paper.",
" This paper designs and analyzes compression algorithms for solving three families of distributed VI/SPP problems. The strengths of the work include extensive theoretical and experimental results highlighting that its designed algorithms outperform existing algorithms for solving VI/SPP. In particular, the results for strongly-monotone, monotone and non-monotone problems are provided. \n\nI don't see any major weaknesses from this paper. However, I would like to point out to typo errors in the paper: \n\n- Line 117-118: \"But often ...\" It is a run-on sentence. \n- Line 125: \"fro details\" -> \"for details\"\n- Line 241: \"in str-monotone case\" -> \"in strongly-monotone case\" N/A N/A "
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3,
3,
3
] | [
"R8M4qH8bfk0",
"1d-4xGTMqBU",
"sDw1E5xEF02",
"1CKiisvjocm",
"tOrktJC477n",
"lpm6CTA9bXs",
"nips_2022_J0nhRuMkdGf",
"we7XXlFhwA",
"Zq3zWYlcn5_",
"IIQ5LeAi990",
"QSGYF4U2ppB",
"6CfERbe3clX",
"Zq3zWYlcn5_",
"o6KR5q9_xfz",
"nips_2022_J0nhRuMkdGf",
"nips_2022_J0nhRuMkdGf",
"nips_2022_J0nhRuMkdGf",
"nips_2022_J0nhRuMkdGf",
"nips_2022_J0nhRuMkdGf",
"nips_2022_J0nhRuMkdGf"
] |
nips_2022_yfNSUQ3yRo | Noise Attention Learning: Enhancing Noise Robustness by Gradient Scaling | Machine learning has been highly successful in data-driven applications but is often hampered when the data contains noise, especially label noise. When trained on noisy labels, deep neural networks tend to fit all noisy labels, resulting in poor generalization. To handle this problem, a common idea is to force the model to fit only clean samples rather than mislabeled ones. In this paper, we propose a simple yet effective method that automatically distinguishes the mislabeled samples and prevents the model from memorizing them, named Noise Attention Learning. In our method, we introduce an attention branch to produce attention weights based on representations of samples. This attention branch is learned to divide the samples according to the predictive power in their representations. We design the corresponding loss function that incorporates the attention weights for training the model without affecting the original learning direction. Empirical results show that most of the mislabeled samples yield significantly lower weights than the clean ones. Furthermore, our theoretical analysis shows that the gradients of training samples are dynamically scaled by the attention weights, implicitly preventing memorization of the mislabeled samples. Experimental results on two benchmarks (CIFAR-10 and CIFAR-100) with simulated label noise and three real-world noisy datasets (ANIMAL-10N, Clothing1M and Webvision) demonstrate that our approach outperforms state-of-the-art methods.
 | Accept | The work proposes a simple method for training an 'attention layer' that can give weights for different input samples. These weights are learned during the training process. This method appears to be theoretically justified, the method is relatively simple, and the empirical results seem reasonable. One concern that I share with the reviewers is about the hard (but correctly labeled) samples. The authors provide some explanation and results in the appendix, which mostly allay the concerns. I have to agree that introducing a somewhat sensitive parameter $\lambda$ is not ideal, but overall the empirical and theoretical justifications tip the balance towards acceptance in this case. | train | [
"KVZ5sb035HX",
"eg5pomG1Bci",
"8kNH9WD9C3t",
"hz0fw4dUzM",
"2HafN27-d5z",
"V4oDBR6Eg8S",
"0ssD9X9bx-",
"SXUFbYjdGAU",
"cJM2yeFYHFB"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. We have updated the Appendix for more experiments on hard samples. In Appendix F, we provide the movement of $\\tau$ distribution in the early learning stage to empirically explain how hard samples can be learned by using the proposed method.",
" I appreciate the authors' response to my questions which would help readers' understanding of the paper. Having read all the responses to my and other reviewers' review, there is no further question I would like to ask except for an additional experiment result on hard samples (say, for clean samples that contain high loss in the early learning phase, how does NAL work during training?)\n\nIn this regard, I would like to keep my review score up to now.",
" Thanks for the constructive comments from the reviewers!\n\nOur major contribution is to use an extra branch and propose a specific loss function to learn weights for training samples. The designed loss scales the gradients of the samples according to the learned weights, thus preventing the memorization of noisy labels in training. However, the significance of this contribution, as recognized by Reviewer fZY5 and BcSR, may be a little bit underrated by Reviewer umW7.\n\nThe reviewers' questions can be summarized as follows:\n1. Selection of hyperparameters. \n2. Evaluation details.\n3. Discussion on hard samples. \n4. Comparison with other related methods and instance-dependent noise.\n\nWe have answered the above points in our response. Please take a look at them.\n\nRegards,",
" Thank you for comments and questions. We address them below.\n\n**Q1.** **The purpose of Lemma 4.1.** \n**A1.** Lemma 4.1 is used to better explain the gradient scaling effect in Theorem 4.1. \n\n**Q2.** **How does the proposed method handle samples with different difficulty levels?** \n**A2.** We assume the training set consists of mislabeled samples, easy (clean) samples and hard (clean) samples. In the early learning phase, both mislabeled samples and hard samples have lower weights compared to the easy samples, which forces the model to learn only from easy samples due to gradient scaling (Theorem 4.1). After the model has learned from easy samples for a while, the model then learns from the hard samples rather the mislabeled samples. This is because the hard samples are more connected to the easy samples than the mislabeled samples, as the former two share some common features. Therefore, the proposed method can continuously learn from clean samples (both easy and hard) and prevent the memorization of mislabeled samples. In Figure 2, we can observe that our weight distribution has less overlap between clean and mislabeled samples compared to directly using the loss distribution [1].\n\n**Q3.** **Is the noise attention loss used for the model training alone or in conjunction with the cross entropy loss?** \n**A3.** We use the noise attention loss alone in model training, as shown in Algorithm 1.\n\n**Q4.** **About notations.** \n**A4.** 1) The $j$ in Sec 3.3 indicates the $j$-th epoch, which is from epoch 1 to epoch $m$. 2) The $j$ in Lemma 4.1 represents $j$-th entry of the prediction (i.e. $p_{j}$) or the ground truth distribution (i.e. $q_{j}$ ) or the logits (i.e., $z_{j}$). 3) $\\hat{t}$ is a typo, we only have $t$ as defined in Sec 3.3. We will correct it and make the notation clearer in the new version.\n\n[1] Arazo, Eric, et al. \"Unsupervised label noise modeling and loss correction.\" ICML 2019. ",
" Thank you for an insightful review and questions. Please see below for answers to your questions.\n\n**Q1. About hard samples.** \n**A1.** We assume the training set consists of mislabeled samples, easy (clean) samples and hard (clean) samples. In this work, we do not design a specific technique to distinguish hard samples but the proposed method can handle hard samples well. In the early learning phase, both mislabeled samples and hard samples have lower weights compared to the easy samples, which forces the model to learn only from easy samples due to gradient scaling (Theorem 4.1). After the model has learned from easy samples for a while, the model then gradually learns from hard samples rather than the mislabeled samples. This is because the hard samples are more connected to the easy samples than the mislabeled samples, as the former two share some common features. Therefore, the proposed method can continuously learn from clean samples (both easy and hard) and prevent the memorization of mislabeled samples. In Figure 2, we can observe less overlap between clean and mislabeled samples compared to using the loss distribution [1]. Studying hard samples is an important task in noisy label scenarios. Similar procedures have been proposed to distinguish the hard samples in [2,3], i.e., training the classifiers only on easy samples, then using the classifiers to further distinguish the hard samples from the mislabeled samples.\n\n**Q2. Some clean samples are given low weights, and some mislabeled samples are given high weights.** \n**A2.** First, the clean samples given low weights are actually hard (clean) samples. As we discussed in A1, a classifier trained only on easy samples is able to discriminate hard samples from mislabeled samples. Since CIFAR-10 with 40% noise contain more easy samples than 60% noise, the classifier trained on 40% noise has stronger capability to discriminate hard samples than the classifier trained on 60% noise. Therefore, in Figure 2, we can observe that in CIFAR-10 with 40% noise, less clean samples have lower weights compared to 60% noise. Second, for all noisy cases in Figure 2, there exist mislabeled samples given high weights. We investigate some samples and find that these mislabeled samples are hard samples with \"proper\" wrong labels. For example, given a hard sample $x_{\\text{h}}$: a cat that looks like a dog due to blurry resolution (e.g. 32 $\\times$ 32). When generating simulated label noise, it is likely to assign a dog label for $x_{\\text{h}}$. Compared to other mislabeled samples, this kind of mislabeled sample can be learned easily by the classifier, resulting in a high weight. \n\n**Q3. Combining NAL with MixUp.** \n**A3.** MixUp can be regarded as a strong augmentation and regularization technique. Theoretically, combining NAL with MixUp does not guarantee performance gains. Empirically, our experiments show performance improvements when using MixUp, especially at a high noise level. Below are the results on CIFAR-10. \n| Methods | sym 40% | sym 80% |\n|:-:|:-:|:-:|\n| NAL (ours) | 93.49 ± 0.07 | 80.98 ± 0.27 |\n| NAL (ours) + Mixup | 93.85 ± 0.11 | 83.71 ± 0.52 |\n\n(Note that the reason we do not use MixUp is for fair comparison with other methods that only modify the loss.)\n\n**Q4. Comparison with FINE and PES.** \n**A4.** The results are collected by running their official code and the backbone is ResNet34 for consistency. 
\n\n| Datasets | Methods | sym 20% | sym 50% | sym 80% | asym 40% |\n|:-:|:-:|:-:|:-:|:-:|:-:|\n| CIFAR-10 | FINE | 91.0 ± 0.1 | 87.3 ± 0.2 | 69.4 ± 1.1 | 89.5 ± 0.1 |\n| CIFAR-10 | PES | 92.3 ± 0.3 | 86.5 ± 0.5 | 28.0 ± 2.7 | 89.9 ± 0.6 |\n| CIFAR-10 | NAL (ours) | 94.4 ± 0.0 | 91.9 ± 0.1 | 81.0 ± 0.3 | 92.1 ± 0.1 |\n| CIFAR-100 | FINE | 70.3 ± 0.2 | 64.2 ± 0.5 | 25.6 ± 1.2 | 61.7 ± 1.0 |\n| CIFAR-100 | PES | 68.9 ± 0.5 | 58.9 ± 2.7 | 15.4 ± 3.5 | 63.3 ± 1.2 |\n| CIFAR-100 | NAL (ours) | 77.8 ± 0.3 | 70.1 ± 0.3 | 36.8 ± 0.7 | 74.7 ± 0.1 |\n\n**Q5. Performance on instance-dependent noise.** \n**A5.** We evaluate the proposed method using PMD instance-dependent noise from PLC [4]. For consistency with the PLC, the backbone is PreActResNet34.\n| Datasets | Methods | Type I (35%) | Type II (35%) | Type III (35%) |\n|:-:|:-:|:-:|:-:|:-:|\n| CIFAR-10 | PLC | 82.80 ± 0.27 | 81.54 ± 0.47 | 81.50 ± 0.50 |\n| CIFAR-10 | NAL (ours) | 88.81 ± 0.13 | 87.66 ± 0.23 | 88.57 ± 0.16|\n| CIFAR-100 | PLC | 60.01 ± 0.43 | 63.68 ± 0.29 | 63.68 ± 0.29 |\n| CIFAR-100 | NAL (ours) | 66.55 ± 0.16| 67.15 ± 0.11| 66.59 ± 0.40|\n\n[1] Arazo, Eric, et al. \"Unsupervised label noise modeling and loss correction.\" ICML 2019. \n[2] Bai, Yingbin, et al. \"Me-momentum: Extracting hard confident examples from noisily labeled data.\" ICCV 2021. \n[3] Cordeiro, Filipe R., et al. \"PropMix: Hard Sample Filtering and Proportional MixUp for Learning with Noisy Labels.\" BMVC 2021. \n[4] Zhang, Yikai, et al. \"Learning with feature-dependent label noise: A progressive approach.\" ICLR 2021.",
" Thank you for a thoughtful review and questions. Please see below for answers to your questions.\n\n**Q1.** **Selection of hyperparameters.** \n**A1.** First, the sensitivity to $\\alpha$ is quite mild as shown in Figure 4(f). We use a fixed $\\alpha=0.9$ for all datasets. As for $\\lambda$, its optimal value does depend on the complexity of the dataset. In Figure 4(e), the accuracy results using very small $\\lambda$s (i.e. 0, 0.01, and 0.05) are only to demonstrate the effect of boost term $\\mathcal{L}_{\\text{b}}$. In our experiments, we perform a grid search of $\\lambda$ from [0.1, 0.5, 5, 10, 50] using a noisy validation set sampled from the noisy training set (Note that the reason why a noisy validation set works has been empirically explored in [1] and theoretically proved in [2]). Therefore, we can still easily find the optimal $\\lambda$ for different datasets, since it is the only hyperparameter that needs to be tuned.\n\n**Q2.** **Inconsistent baselines for different noises**. \n**A2.** The datasets and noise assumptions used for evaluation vary in papers. The approaches (e.g. NLNL [3] and DAC [4]) that simply modify the training loss are usually evaluated on simulated label noise, while only recently proposed methods (e.g. ELR [1], FINE [5], and Nested [6]) have been further evaluated on one or two real-world noisy datasets. In contrast, our experiments broadly cover three real-world noisy datasets to demonstrate that the proposed method can provide substantial improvements in different label noises. Therefore, you may observe inconsistencies in the baselines of different datasets in our paper, which also commonly happens in existing works [1,5,6].\n\n**Q3.** **Some accuracy results have variance while some don't**. \n**A3.** Generally, whether the result for a certain dataset have variance depends on two conditions: i) The noisy labels are fixed; ii) A validation set is provided. Only if both conditions are met, the result can have no variance. According to the dataset information, we have the following table:\n| Datasets/Conditions | Fixed noisy labels? | Validation set is provided? | Results have variance? |\n|:--:|:-:|:-:|:-:|\n| CIFAR with simulated noise | No | No | Yes |\n| Animal-10N | Yes | No | Yes |\n| Clothing1M | Yes | Yes | No |\n| Webvision | Yes | Yes | No | \n\nIn our experiments, randomly splitting a noisy validation set or randomly generating label noise requires multiple runs to obtain the average results and its standard deviation. Therefore, the results on CIFAR and Animal-10N have variance, while the results on Clothing1M and Webvision usually don't have variance, which is consistent with most existing works [1,5,6]. Note that some of the two CIFAR results are taken from their original papers, including Joint Opt, NLNL, DAC and SAT [7]. Because the results in their original papers do not have variance, their results in Table 1 also have no variance. The reason we take the results directly from the original paper is that either the paper does not provide the code, or the provided code could not achieve the results reported in the original paper. 
For example, we run the official code of SAT, the results on CIFAR-10 are below:\n\n| Source | sym 20% | sym 40% | sym 60% | sym 80% |\n|:-:|:-:|:-:|:-:|:-:|\n| From original paper | 94.14 | 92.64 | 89.23 | 78.58 |\n| Our re-run results | 93.92 ± 0.07 | 92.56 ± 0.23 | 89.14 ± 0.25 | 74.84 ± 1.92 |\n\nIt can be observed that on CIFAR-10 with sym 80% noise, our re-run results are much lower than the reported results. Therefore, we chose to take the results directly from the original papers.\n\n[1] Liu, Sheng, et al. \"Early-learning regularization prevents memorization of noisy labels.\" NeurIPS 2020. \n[2] Chen, Pengfei, et al. \"Robustness of accuracy metric and its inspirations in learning with noisy labels.\" AAAI 2021. \n[3] Kim, Youngdong, et al. \"Nlnl: Negative learning for noisy labels.\" ICCV 2019. \n[4] Thulasidasan, Sunil, et al. \"Combating Label Noise in Deep Learning using Abstention.\" ICML 2019. \n[5] Kim, Taehyeon, et al. \"Fine samples for learning with noisy labels.\" NeurIPS 2021. \n[6] Chen, Yingyi, et al. \"Boosting co-teaching with compression regularization for label noise.\" CVPR 2021. \n[7] Huang, Lang, et al. \"Self-adaptive training: beyond empirical risk minimization.\" NeurIPS 2020. \n",
" This paper suggests attention-based method to distinguish mislabeled data and decrease their impact during training. Specifically, the authors add an attention branch at the end of the feature-extraction layer (a.k.a. after the penultimate layer) that outputs a confidence score $\\tau$. Then, the attention layer's confidence score $\\tau$ is incorporated into the loss function to softly divide samples into clean ones and mislabeled ones; the gradient of clean ones is trained as-is due to their early learning phenomenon, while that of mislabeled ones is not trained because $\\tau$ is also trained to decrease the impact of mislabeled ones. The paper supports their idea both theoretically and empirically. ## Strength\n\nThe strength of this paper can be elaborated as follows:\n1. The writing is easy and clear to follow.\n2. Incorporation of an attention layer and temporal ensembling-based target estimation is well-motivated and harmonious.\n3. Theoretical justification and empirical support are sound.\n\n## Weakness \n\n1. A strong weakness is the use of two hyperparameters $\\lambda$ and $\\alpha$. Especially, $\\lambda$ should be manually decided for each dataset, which is burdensome and hence becomes a strong hurdle for the suggested method.\n2. The baselines in the experiments are inconsistent, possibly due to the copy of experimental results from original papers. \n3. Why accuracy results on some of the two CIFAR datasets, Clothing 1M, and Webvision do not have variance results? They should be consistently reported. The questions would be about how to deal with the above weakness points. No critical limitation is found up to this point.",
" This paper presents a method for training models with noisy labels. The proposed method learns a weight for each training sample based on its noise level (measured by the difference between model prediction and label), using an attention branch. Experimental results on benchmark datasets show that the proposed method was able to improve the model performance compared to several noisy training approaches. __Strengths__\n- The idea is simple and experimental results are promising.\n\n__Weaknesses__\n- The theoretical justification is weak. Lemma 4.1 is essentially saying noisy labels hurts the model that trained with cross entropy loss. This is a well-known theory. I don't see how make it as a Lemma helps the discussion in this paper. 1. All the proposed techniques in this paper are built based upon the observation that noisy samples have higher loss compared to clean samples. How does the proposed method handle samples with different difficulty levels?\n2. It is unclear if the noise attention loss is used for the model training alone or in conjunction with the cross entropy loss.\n3. Math notation needs to be improved. For example, $j$ is never defined in Sec 3.3 and it is used represent different things in Lemma 4.1 and Theorem 4.1. What is the difference between $t$ and $\\hat{t}$ in Algorithm 1?\n No",
" The authors proposed a simple method that trains an attention layer to generate weights for different samples. By using neural networks’ memorization effects, the weights can be learned during the training process. This method simplifies the training process, which does not need to extract confident examples. The authors also give theoretical analyses of their method. The proposed method experimentally shows large improvements compared with baselines. 1. Training an attention layer, directly outputting a score, is novel. \n2. The proposed method is simple and empirical works well.\n3. This paper is well organized and easy to understand.\n 1. Although the empirical results are good, could the authors simply explain how this proposed attention branch distinguishes mislabelled samples and some hard examples that are not learned in the early learning phase?\n2. For CIFAR-10 with 60% in figure 2, I found some noise is given with high weights, and some clean data is given low weights. Can you explain it? \n3. I found the performance is lower than some state-of-the-art methods, and the authors claim that it does not employ some techniques like MixUp, or two networks. Does the proposed method combine well with these widely used techniques in learning with label noise, for example, MixUp? \n 1. From my perspective, the word “attention” looks a little ambiguous. It more seems like a confidence score for each sample. \n2. The proposed method builds on semi-technique ensemble predictions. So, I think the authors should compare it with state-of-the-art methods that also use semi-technique e.g., DivideMix, FINE[1], or PES [2], which makes it easy to compare. \n3. The authors only conduct symmetric label noise and asymmetric noise. Some commonly used settings like Pairflip 45% and Instance-dependent label noise [3, 4] are missing. \n4. The hyperparameter lambda is sensitive and has a large range for different datasets. \n\n[1] FINE Samples for Learning with Noisy Labels\n[2] Understanding and Improving Early Stopping for Learning with Noisy Labels \n[3] LEARNING WITH FEATURE-DEPENDENT LABEL NOISE: A PROGRESSIVE APPROACH\n[4] Part-dependent Label Noise: Towards Instance-dependent Label Noise Xiaobo\n"
] | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"eg5pomG1Bci",
"V4oDBR6Eg8S",
"nips_2022_yfNSUQ3yRo",
"SXUFbYjdGAU",
"cJM2yeFYHFB",
"0ssD9X9bx-",
"nips_2022_yfNSUQ3yRo",
"nips_2022_yfNSUQ3yRo",
"nips_2022_yfNSUQ3yRo"
] |
nips_2022_G1uywu6vNZe | Exponential Family Model-Based Reinforcement Learning via Score Matching | We propose an optimistic model-based algorithm, dubbed SMRL, for finite-horizon episodic reinforcement learning (RL) when the transition model is specified by exponential family distributions with $d$ parameters and the reward is bounded and known. SMRL uses score matching, an unnormalized density estimation technique that enables efficient estimation of the model parameter by ridge regression. Under standard regularity assumptions, SMRL achieves $\tilde O(d\sqrt{H^3T})$ online regret, where $H$ is the length of each episode and $T$ is the total number of interactions (ignoring polynomial dependence on structural scale parameters). | Accept | This is a clear and carefully written paper with a solid mathematical contribution. The reviewers are unanimous in supporting acceptance. | train | [
"YSxYwhAQAY",
"eEtwWFFlon",
"60cy9tXD_tI",
"LlnZuqanrDg",
"_Jdkh9zxYaz",
"qfXhBg79DY",
"bH5hau_iiPs",
"HI7NAiRyDLT",
"AX_qmGtfhhl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response. It'll be interesting to see how this idea works out in practice. I'm satisfied with the response and have updated my score.",
" After reading the other reviewers' comments, I will keep my score unchanged.",
" Thanks for your detailed response.\nGiven that my concerns have been addressed, I have updated my score. ",
" We thank the reviewer for their comments and time. \n\n*To address the empirical performance*, we refer the reviewer to our response to Reviewer rkgm (so as not to be repetitive). \n\nAnswering the questions below:\n\n1. **Is there a way to express this concentration result without vectorization?** The short answer is we do not know how to do this, but we believe it is an interesting direction for future work. We make two comments. \n\n a. We agree that a more general result would be to assume that the transition models follow the form $\\exp(\\langle \\theta, \\phi(s,a,s’)\\rangle )$ where $\\phi$ is some known feature mapping. This is a finite dimensional version of the exponential family introduced by [1]. However, our theoretical guarantees depend on the bilinear form, and it is unclear how to extend score matching to work in the general setting. The terms that appear in our score matching loss (see, e.g., Thm 1) depend on this bilinear structure. For example, in $\\hat{V}_n$, we take derivatives with respect to the $\\psi$ mapping. \n\n b. Due to vectorization, we achieve a concentration guarantee for Frobenius norm. This occurs because our score matching estimator is solving ridge regression over the vectorized parameters. Note that the Exp-UCRL also obtains a Frobenius norm guarantee; while LC3 obtains a spectral norm guarantee (for nonlinear dynamical systems). It may be possible to prove a stronger regret guarantee for exponential family transitions that relies on spectral norm concentration.\n2. **Explaining line 184 and Assumption 2.** To provide some intuition, it is helpful to compare to the standard setup of linear regression. Roughly speaking, one can view the $\\Phi_t$ as covariates and the $\\xi_t$ as the response. (Notice that $\\Phi_t$ contains the information about the current $(s_t,a_t)$ pair, while $\\xi_t$ contains information about the next state $s_{t+1}$.) However, score matching is different from linear regression because the “covariate matrix” $V_n$ contains an matrix $C$ which captures the “curvature” of the $\\psi$ mapping. \nOn a more technical level, in the proof of the concentration guarantee, we control the quantity $\\hat{b}_n + \\hat{V}_n \\mathrm{vec}(W_0) = \\sum \\Phi_t (\\xi_t + C_t W_0 \\Phi_t)$. The term $\\xi_t + C_t W_0 \\Phi_t$ can be interpreted as the “error” term, which we assume is subgaussian in order to control (using (A) and (B)); however, in order to apply the self-normalized martingale guarantee we require (C) in order to change the matrix norm in Eq. 9. As for assumption (D), it is used to relate the KL divergence of two models to the distance between parameters in the regret analysis.\nIn our revision, we will improve the discussion for these quantities.\n3. **A tighter regret for updating during episode.** Yes, one might be able to achieve tighter regret guarantees if the concentration guarantee (Thm 2) is applied at every step in the episode. The downside is a higher computational burden. Both the score matching procedure and the planning procedure (which is hard to begin with and practically must be approximated) needs to be called $KH$ times instead of $H$ times.\n\n[1] Canu and Smola. “Kernel methods and the exponential family.”\n",
" We thank the reviewer for their comments and time, and have no corrections or objections. \n\n*To address the empirical performance*, we refer the reviewer to our response to Reviewer rkgm (so as not to be repetitive).\n",
" We thank the reviewer for their comments and time. \n\n**Regarding experimental results.** We are indeed interested in seeing how these ideas play out in practice. We have successfully experimented with the estimation component, showing that score matching can indeed efficiently recover parameters for exponential family models - in particular, we can estimate models which *go beyond* nonlinear dynamical systems due to differences in the $q$ and $\\psi$ functions, as we claim in the paper. To see improvements in an actual RL problem, we need to combine this with a practical planning procedure (as with other approaches) and an interesting transition model, both of which require significant domain expertise. We are now working with roboticists on stochastic control problems which can showcase the benefits of using SMRL (with a more expressive density) over the LC3 approach (i.e., fitting a nonlinear dynamical system). This is a complex project, and in the meanwhile we hope that our ideas and theoretical methods, like other theoretical developments in RL, can inspire also other practitioners, as well as lead to further theoretical progress.\n\nAnswering the (other) questions below:\n\n**What to do if the transition model doesn’t belong to the exponential family distribution? Can we approximate such transition models which do not belong to the exponential family?** Our theoretical results hold for the so-called “realizable” setting, where we assume the ground truth model lies in some model class (Definition 1). A more reasonable setting would be the “misspecified” or “agnostic” setting, where we assume that the transition model only approximates reality up to some “error”. \n\nGenerally, understanding what the right notion of “error” is for model-based RL is a challenging open problem, not just for our setting. One example is the well-studied linear MDP [1,2]. The paper [2] shows that their algorithm LSVI-UCB adapts to misspecification in total variational distance to achieve $\\mathrm{poly}(d,H,T)$ regret (see their Thm 3.2). However, if one weakens the notion of “error” to an $\\ell_\\infty$ notion of error, then the paper [3] establishes exponential lower bounds.\n\nWe agree that it would be interesting to establish theoretical guarantees for SMRL which hold under misspecification, e.g., in TV distance. We leave this to future work.\n\nIn practice, one can always run SMRL even in the presence of misspecification. In fact, control theorists have had tremendous success modeling complicated nonlinear systems as linear dynamical systems for decades! The appeal of modeling via exponential families (Definition 1) vs. just using linear dynamical systems is that (1) as a richer class, they can model more complicated densities (2) via score matching, they can still be estimated efficiently.\n\n[1] Yang and Wang. “Reinforcement learning in feature space: Matrix bandit, kernels, and regret bound.” \n[2] Jin, Yang, Wang, and Jordan. “Provably efficient reinforcement learning with linear function approximation.” \n[3] Du, Kakade, Wang, and Yang. “Is a good representation sufficient for sample efficient reinforcement learning?”.\n",
" This paper presents a new model-based algorithm, called SMRL, for finite horizon episodic MDPs where the transition model is specified by exponential family distribution. Essentially, the work builds on Exp-UCRL [1] and proposes to use score matching instead of MLE to estimate the parameters of the transition model, which helps it run efficiently by eliminating the need to estimate the computationally expensive log partition function required in MLE. The proposed algorithm matches the regret bound offered by Exp-UCRL while being computationally efficient. The paper also presents theoretical proof for their efficient algorithm. Strengths:\n- Strong theoretical justification.\n- The language of the paper is fine apart from some typos and uncommon abbreviations.\n\nWeaknesses:\n- The assumption that transition model belongs to exponential family may be limiting for many real-world problems.\n- Although it is a theory paper, I would have liked to see how does the algorithm performs empirically vs Exp-UCRL.\n Q1: How can we handle the problems for which the transition model does not belong to the exponential family distribution? \nQ2: Can we approximate such transition models which do not belong to exponential family? The main limitations is that the algorithm works on specific problems which satisfies the assumptions on transition and reward models, which limits the application of the algorithm.",
" This paper studies model-based reinforcement learning for episodic MDP whose transition model is parametrized by exponential families with features of state and action. \nTo estimate the model parameter, the author uses the score matching technique to minimize the expected squared distance between the score functions. \nTo promote exploration, the author utilizes the optimistic planning.\nThe author states that under some regularity assumptions, the suggested algorithm achieves $\\tilde{\\mathcal{O}}(d \\sqrt{H^3 T})$. I think their research is relevant to RL community since they focus on how to design a provably and efficient algorithm for the nonlinear environment.\nBeyond the existing work based on the linearity assumption for transition models or MDPs, as an extension of Exp-UCRL [10], which dealt with the problem when the transition models are parametrized by exponential families, the author proposes a more efficient method for estimating model parameters. \nBased on the score matching technique, it can be more efficient than the previous method because it does not need to estimate the log-partition $Z_{sa}$.\nAlso, I think the analysis of the proposed algorithm is sound and the writing is clear.\n\nHowever, in my opinion, the most important part of score matching relies on how to formulate the Fisher divergence (eq 3) into an empirical score matching loss (eq SM-L). \nThis result is presented in Theorem 1 and since this result is from [4], I carefully consider whether the result of this paper has significant novelty\n\nAlso, in Theorem 2, it presents the result of a self-normalized concentration guarantee when the parameter to be estimated $\\hat{W}$ is vectorized.\nI think this kind of result can be obtained if the problem setting $\\exp(\\langle \\psi, W \\phi \\rangle)$ is replaced with $\\exp(\\langle vec(W), vec(\\psi \\phi^\\top) \\rangle)$.\nIf so, I think the bilinear structure disappears. Is there a way to express this concentration result without vectorization?\n (1) Before introducing Assumption 2, $\\Phi(s,a), C(s'), \\xi(s')$ are defined on line 184. If the author could explain these functions in more detail, it would be helpful to understand the Assumption 2.\n\n(2) The current model parameter $\\hat{W}_k$ is updated after the episode is over. However, since the current problem is a model-based setting, I think it is possible to update the model parameter on every horizon because the agent receives transition feedback on every horizon. If the agent can construct a confidence set for the true model parameter every horizon without considering the computation issue for planning, I think the regret bound might be tighter. I wonder what the author thinks about this.\n\nminor typos\n1. Line 566: I think it would be better to unify the numbering of Assumption 2 and the numbering in the appendix.\n2. Line 579: $\\hat{V}_n + \\lambda I \\succeq \\lambda I$\n3. Line 615: I think \"Under Definition 1 and 2\" should be fixed to \"Under Definition 1 and Assumption 2\" I think there are no issues related to social impact. However, although this paper is highly related to theoretical part, considering that many recently published theoretical papers about model-based RL also present numerical experiments, I think it would be better if there is an experimental result in this paper. ",
" The paper proposes a novel method for estimating the transition model of an MDP parametrized by exponential functions, in the episodic finite-horizon model-based RL setting with bounded and known rewards. The method uses score matching that reduces to ridge regression of the known parameters of the exponential parametrization, thus eliminating the difficulties associated with estimating the partition function in MLE-based methods. An estimate of the online regret for the new algorithm is derived, too. The main contribution of the paper is probably the idea to apply score matching to the exponential parametrization proposed earlier, resulting in a more efficient estimation algorithm. This is a very non-obvious and original advance, and given that the investigated parametrization subsumes a very large class of systems encountered in practice, the practical significance of this advance is likely to be high. However, the paper is entirely theoretical, and it is difficult to understand the computational advantages of the proposed method without at least some kind of empirical evaluation. What are the actually achievable advantages of the proposed method in comparison with a reasonable baseline, for example MLE? Section 5 provides a comparison in general terms, but there is no empirical verification. Maybe such a verification on prototypical test MDPs would be useful in illustrating the analysis? The authors have addressed limitations adequately, for example they have conscientiously pointed out that their analysis does not cover LQR problems, due to their unbounded costs. I do not see any potential negative societal impacts of this work.\n\nMinor typos:\nP.4 L.136,138: \"gaussian\" -> \"Gaussian\""
] | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"qfXhBg79DY",
"_Jdkh9zxYaz",
"LlnZuqanrDg",
"HI7NAiRyDLT",
"AX_qmGtfhhl",
"bH5hau_iiPs",
"nips_2022_G1uywu6vNZe",
"nips_2022_G1uywu6vNZe",
"nips_2022_G1uywu6vNZe"
] |
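The responses and reviews above repeatedly rely on the fact that score matching for exponential-family models reduces to solving a regularized linear system, with no log-partition function ever computed. The sketch below is only an illustration of that reduction for an unconditional, finite-dimensional exponential family $p_\theta(x) \propto \exp(\langle \theta, \psi(x) \rangle)$; it is not the paper's SMRL estimator, which handles the conditional bilinear case $\exp(\langle \psi(s'), W \phi(s,a) \rangle)$, and all names in the code (`score_matching_estimate`, `grad_psi`, `lap_psi`, `lam`) are placeholders chosen for this example.

```python
# Illustrative sketch only: classical (Hyvarinen-style) score matching for a
# finite-dimensional exponential family p_theta(x) ~ exp(<theta, psi(x)>).
# The minimizer of the empirical score matching loss solves a ridge-regularized
# linear system and never touches the log-partition function.
import numpy as np

def score_matching_estimate(X, grad_psi, lap_psi, lam=1e-3):
    """X: (n, d) samples; grad_psi(x) -> (d, k) matrix of gradients of the k
    sufficient statistics; lap_psi(x) -> (k,) coordinate-wise Laplacians.
    Returns theta_hat in R^k."""
    k = lap_psi(X[0]).shape[0]
    V = np.zeros((k, k))
    b = np.zeros(k)
    for x in X:
        G = grad_psi(x)          # (d, k)
        V += G.T @ G             # quadratic term of the empirical loss
        b += lap_psi(x)          # linear term (from integration by parts)
    n = len(X)
    # Minimize 0.5 * theta^T (V/n) theta + (b/n)^T theta + 0.5 * lam * ||theta||^2
    return np.linalg.solve(V / n + lam * np.eye(k), -b / n)

# Sanity check on N(mu, sigma^2) with psi(x) = (x, x^2 / 2), for which the true
# natural parameters are theta = (mu / sigma^2, -1 / sigma^2).
rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
X = rng.normal(mu, sigma, size=(5000, 1))
grad_psi = lambda x: np.array([[1.0, x[0]]])   # d/dx of (x, x^2/2)
lap_psi = lambda x: np.array([0.0, 1.0])       # d^2/dx^2 of (x, x^2/2)
print(score_matching_estimate(X, grad_psi, lap_psi))
print(np.array([mu / sigma**2, -1.0 / sigma**2]))
```

The design point worth noting is that the estimator is a single linear solve; the conditional, vectorized case discussed in the author responses keeps this structure, which is why it admits a ridge-regression-style concentration analysis.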